Once that’s done, you can try it out with the customary “Hello World” example…
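If you want to follow along, a minimal check is Docker's own test image:

docker run hello-world

If everything is installed correctly, Docker pulls the hello-world image from Docker Hub and prints a short "Hello from Docker!" message to confirm the setup works.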
I’m running Docker on an Ubuntu VM, but the commands and the results are the same regardless of platform – that portability is one of Docker's main selling points.
You can then check which processes (docker containers) are running using the “docker ps” command – in my example you can see that there’s one Jenkins image running. If you run “docker ps -a” you will see all containers (including stopped ones, of which I have a few on this host):
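For reference, the two commands are simply:

docker ps
docker ps -a

Both print a table of container IDs, images, status and names; the -a flag adds the containers in the Exited state as well.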
and you can check your Docker version with:
root@ubuntud:~# docker --version
Docker version 1.13.0, build 49bf474
Now that the basic setup is done, we can move on to something a little more interesting – downloading and running a “Dockerised” Jenkins container.
I’m going to use my own Dockerised Jenkins Image, and there will be more detail on that in the next post – you’re welcome to try it out too, just run this command in your terminal:
docker run -d -p 8080:8080 donaldsimpson/dockerjenkins
If you don’t happen to have my Docker image cached locally (like I do), Docker will automatically download it for you from Docker Hub and then run it:
That command did quite a few important things – here’s a quick explanation of each:
docker run -d
tells docker that we want to run the container in the background so that we can carry on and do other things while it runs. The alternative is -it, for an interactive/foreground session.
docker run -d -p 8080:8080
The -p 8080:8080 tells docker to map port 8080 on the local host to port 8080 in the running container. This means that when we visit localhost:8080 the request will be passed through to the container.
docker run -d -p 8080:8080 donaldsimpson/dockerjenkins
and finally, we have the namespace and name of the Docker image we want to run – my “donaldsimpson/dockerjenkins” one – more on this later!
You can now visit port 8080 on your Docker host and see that Jenkins is up and running…
That’s Jenkins up and running and being happily served from the Docker container that was just pulled from Docker Hub – how easy was that?!
And the best thing is, it’s entirely and reliably repeatable, it’s guaranteed to work the same on all platforms that can run Docker, and you can quickly and easily update, delete, replace, change or share it with others! Ok, that’s more than one thing, but the point is that there’s a lot to like here 🙂
That’s it for this post – in the next one we will look into the various elements that came together to make this work: the code and configuration files in my Git repo, the automated build process on Docker Hub that builds and updates the Docker image, and how the two are related.
I wrote a while back about my troubles with Carrier-Grade NAT (CGNAT), and described a solution that involved tunneling out of CGNAT using a combination of SSH and an AWS server – the full article is here.
That worked OK, but it was pretty fragile: connections could drop, sessions expired, hosts rebooted, and so on. Routing all of the data through my EC2 host wasn’t ideal either.
My “new and improved” solution to this is to use a local tool like ngrok to create the tunnel for me. This is proving far simpler to manage and more reliable, and ngrok provides a load of handy additional features too.
Here’s a very quick run through of getting it up and running on my Ubuntu VM, which sits behind CGNAT and hosts a webserver I’d like to be able to access from the outside occasionally. This is the front end to my ZoneMinder CCTV interface, but it could be anything you want to host and on any port.
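As a rough sketch of the ngrok side – assuming you’ve downloaded the Linux client zip from ngrok.com, and that the webserver is listening on local port 80 – it boils down to:

unzip ngrok-stable-linux-amd64.zip
./ngrok http 80

ngrok then displays a public forwarding URL that tunnels straight through to the local webserver, CGNAT or not.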
After switching to a 4G broadband provider, who shall (pretty much) remain nameless, I discovered they were using Carrier-Grade NAT (aka CGNAT) on me.
There are more details on that here and here, but in short: the ISP ‘saves’ IPv4 addresses by sharing them out amongst several users and NAT’ing their connections – in much the same way as you do at home when you port-forward multiple devices behind one external IP address. My home network is just one ‘device’ in a pool of their users, all sharing the same external IP address.
The impact of this for me is that I can no longer NAT my internal network services, as I have been given a shared public-facing IPv4 address. This approach may be practical for a bunch of mobile phone users wanting to check Twitter and Facebook, but it sucks big time for gamers or anyone else wanting to connect things from their home network to the internet. So, rather than having “Everything Everywhere” through my very expensive new 4G connection – with a 12-month contract – it turns out I get “not much to anywhere“.
Point being: I would like to be able to check my internal servers and websites when I’m away – especially my ZoneMinder CCTV setup – but my home broadband no longer has its own internet address. So an alternative solution had to be found…
I basically use two servers – the one at home (unhelpfully now stuck behind my ISP’s CGNAT) and one in the Amazon cloud (my public-facing AWS web server with DNS) – and create a reverse SSH tunnel between them. Plus a couple of essential tweaks you won’t find out about if you don’t read any further 🙂
Step 1 – create the reverse SSH tunnel:
This is initiated on the internal/home server and connects outwards to the AWS host on the internet, like so:
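The command itself – user, hostname and key file are placeholders here, so substitute your own – looks like this:

ssh -N -R 8888:localhost:80 -i ~/.ssh/my-aws-key.pem myuser@my-aws-host.example.com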
Here is an explanation of each part of this command:
-N (from the SSH man page) “Do not execute a remote command. This is useful for just forwarding ports.”
-R (from the SSH man page) “Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the given host and port, or Unix socket, on the local side.”
8888:localhost:80 – means: forward connections made to port 8888 on the remote (AWS) host back through the tunnel to localhost port 80 on this side (my ZoneMinder web app). The remote port coming first looks backwards at first glance, but that’s exactly the syntax a reverse tunnel needs.
the -i and everything after it is just me connecting to my AWS host as my user with an identity file. YMMV – whatever you normally do should be fine.
When you run this command you should not see any issues or warnings. You need to leave it running using whatever method you like – personally I like screen for this kind of thing, and will also be setting up Jenkins jobs later (below).
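For example, to start the tunnel in a detached screen session (the session name is just an example):

screen -dmS cgnat-tunnel ssh -N -R 8888:localhost:80 -i ~/.ssh/my-aws-key.pem myuser@my-aws-host.example.com

You can then reattach to it at any time with screen -r cgnat-tunnel.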
Step 2 – check on the AWS host
With that SSH command still running on your local server you should now be able to connect to the web app from your remote AWS Web Server, by reading from port 8888 with curl or wget.
This is a worthwhile check to perform at this point, before moving on to the next 2 steps – for example:
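Run on the AWS host itself, something along these lines (the exact output will depend on your app):

curl -I http://localhost:8888

Getting sensible HTTP response headers back – a 200, or your app’s login redirect – confirms the tunnel is passing traffic.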
This shows that port 8888 on my AWS server is currently connected to the ZoneMinder application that’s running on port 80 of my home web server. A good sign.
Step 3 – configure AWS Security & Ports
Progress is being made, but in order to be able to hit that port with a browser and have things work as I’d like, I still need to configure AWS to allow incoming connections to the newly chosen port 8888.
This is done through the Amazon EC2 Management Console using the left hand menu item “Network & Security” then “Security Groups”:
This should load your current Security Groups, which you can click on to Edit. You may have a few to check.
Now select Add and configure a new Inbound rule something like so:
It’s the “Custom TCP Rule” second from the bottom in my picture, with port 8888 and “Anywhere” (0.0.0.0/0) as the source. Don’t go for the HTTP option – unless you’re sure that’s what you want 🙂
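If you prefer the command line to clicking around the console, the equivalent rule can be added with the AWS CLI – the security group ID below is a placeholder for your own:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8888 --cidr 0.0.0.0/0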
Step 4 – configure SSH on AWS host
At this point I thought I was done… but it didn’t work and I couldn’t immediately see why, as the wget check was all good.
Some head scratching and checking of firewalls later, I realised it was most likely to be permissions on the port I was tunneling – it’s not very likely to be exposed and world readable by default, is it? Doh.
The fix is to enable GatewayPorts in the SSH daemon configuration on the AWS host. Add the line

GatewayPorts yes

to /etc/ssh/sshd_config, checking that there’s no existing reference to GatewayPorts – edit this file carefully and at your own risk.
As I understand it – which may best be described as ‘loosely’ – the reason the earlier wget test worked is that I was connecting over the loopback interface; this change to sshd binds the forwarded port to all interfaces instead. See the detailed answer on this post for further detail, including ways to limit this to specific users.
Once that’s done, restart sshd with
service ssh restart
and you should now be able to connect by pointing a web browser at port 8888 (or whatever you set) of your AWS web server and see your app responding from the other end:
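One gotcha: sshd only applies GatewayPorts to tunnels created after the restart, so if the browser check fails, kill and re-run the Step 1 ssh command, then verify the binding on the AWS host:

ss -tln | grep 8888

All being well, the port now shows as listening on 0.0.0.0:8888 (all interfaces) rather than 127.0.0.1:8888.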
Step 5 – automate it with Jenkins
The final step for me is to wrap this (the ssh tunnel creation part) up in a Jenkins job running on my home server.
This is useful for a number of reasons, such as avoiding and resetting defunct/stale connections and enabling scheduling – i.e. I can have the port forwarded when I want it, and have it shut down during the hours I don’t. A rough sketch of the job follows below.
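As a sketch – a freestyle job with a cron-style build schedule, running a shell step along these lines (names, paths and ports are illustrative):

# drop any stale tunnel, then bring up a fresh one
pkill -f "ssh -N -R 8888:localhost:80" || true
BUILD_ID=dontKillMe nohup ssh -N -R 8888:localhost:80 -i ~/.ssh/my-aws-key.pem myuser@my-aws-host.example.com &

The BUILD_ID=dontKillMe setting stops Jenkins’ process tree killer from reaping the ssh process when the build finishes, and a second scheduled job containing just the pkill line can shut the tunnel down again outside the hours you want it.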