In Part I, Information Radiators, I covered what they are, what the main benefits are, and the approach I usually use to set them up. This post goes into more technical detail on how I extract this data from Jenkins.
My usual setup/architecture for Jenkins Information Radiators goes something along these lines:
And you’ll need some Jenkins instances/jobs to monitor too, obviously 🙂
The Jenkins XML API
The XML API is very useful for automating tasks like this: if you simply append "/api/xml" to a Jenkins job URL, it will serve up an XML version of that page. Note there is also a JSON API, a CLI and plenty of other options, but I'm using what suits me here.
For example, if you go to one of your Jenkins jobs and add /api/xml like this:
“http://yourjenkinsserver:8080/job/yourjobname/api/xml”
you should get back some XML, roughly like this example:
<?xml version="1.0"?>
<freeStyleBuild>
  <action>
    <parameter>
      <name>LOWER_ENV</name>
      <value>dev</value>
    </parameter>
  </action>
  <action>
    <cause>
      <shortDescription>Started by timer</shortDescription>
    </cause>
  </action>
  <building>false</building>
  <duration>61886</duration>
  <fullDisplayName>MyJob #580</fullDisplayName>
  <id>2014-04-01_10-01-50</id>
  <keepLog>false</keepLog>
  <number>580</number>
  <result>SUCCESS</result>
  <timestamp>1396342910088</timestamp>
  <url>http://jenkinsserver:8080/view/MyView/job/MyJob/580/</url>
  <builtOn/>
  <changeSet/>
</freeStyleBuild>
That XML contains loads of very useful information wrapped up in handy, descriptive tags – you just need a way to get at that data, and then you can present it however you like…
XPath queries and the Jenkins XML API
To automate that, I used to extend this approach and query Jenkins via the XML API using XPath queries to bring back just the data I actually wanted, quite like querying a database.
For example, wget’ing this URL would return just the current value of the <building> tag in the above XML:
http://yourjenkinsserver:8080/job/yourjobname/api/xml?xpath=//building/text()
e.g. "true" or "false" – this was very useful and easy to do, but the functionality was removed/disabled in recent versions of Jenkins for security reasons, meaning that my processes that used it needed to be rewritten 🙁
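For reference, the old approach could be scripted roughly like this – a minimal sketch, assuming a Jenkins version that still allows unrestricted XPath queries (the server and job names are just placeholders):
# Fetch just the text of the <building> tag via the XPath API
# (only works on Jenkins versions that still permit this):
IS_BUILDING=$(curl -s "http://yourjenkinsserver:8080/job/yourjobname/api/xml?xpath=//building/text()")
echo "Job currently building? ${IS_BUILDING}"   # prints "true" or "false"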
Extracting the data – Plan B…
So, here's the new solution I went for. The real scripts/methods do some error handling and cleaning up etc., but I'm just highlighting the main functions and the high-level logic behind each of them here:
get_urls method:
query a table in MySQL that contains a list of the job names and URLs to monitor
for each $JOB_NAME found, it calls the get_file method, passing it the URL as a parameter – roughly as sketched below.
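Here's a rough sketch of how that could look – the jenkins_jobs table, its columns and the database credentials are my assumptions, as the real schema isn't shown here:
# Hypothetical get_urls: read job names/URLs from MySQL and fetch each one.
# Table, column and database names are guesses – adjust for your own schema.
get_urls () {
  mysql -N -B -u "$DB_USER" -p"$DB_PASS" radiator \
    -e "SELECT job_name, job_url FROM jenkins_jobs" |
  while read -r JOB_NAME JOB_URL; do
    get_file "$JOB_URL"
  done
}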
get_file method:
this takes a URL param, and uses curl to fetch and save the XML data from that URL to a temporary file (“xmlfile”):
curl -sL "$1" | xmllint --format - > xmlfile
Note I'm using "xmllint --format" there to nicely format the XML data, which makes processing it later much easier.
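Wrapped up as a function, that's more or less all there is to it – a minimal sketch, using the "$1" URL parameter and the temporary filename described above:
# get_file: fetch the job XML and pretty-print it into the temporary file
get_file () {
  curl -sL "$1" | xmllint --format - > xmlfile
}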
get_data method:
this first calls “get_if_building” (see below) to see if the job is currently running or not, then it does:
TRUE_VAR="true"
if [[ "$IS_BUILDING" == "$TRUE_VAR" ]]; then
  RESULT_TEXT="building..."
else
  RESULT_TEXT=`grep "result>" xmlfile | awk -F\> '{print $2}' | awk -F\< '{print $1}'`
fi
get_if_building method:
this simply checks and sets the IS_BUILDING var like so:
IS_BUILDING=`grep building xmlfile | awk -F\> '{print $2}' | awk -F\< '{print $1}'`
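To illustrate, running that pipeline against the <building>false</building> line in the example XML above strips the tags and leaves just the value:
$ grep building xmlfile | awk -F\> '{print $2}' | awk -F\< '{print $1}'
false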
Putting it all together
My script then updates the MySQL database with the results from each check: success/failure, date, build number, user, change details etc
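The exact SQL depends on your own schema, but the update is along these lines – a rough sketch where jenkins_jobs, last_result and the other names are my placeholders, not the real table:
# Hypothetical update – table, column and database names are placeholders:
mysql -u "$DB_USER" -p"$DB_PASS" radiator -e \
  "UPDATE jenkins_jobs
      SET last_result  = '${RESULT_TEXT}',
          last_checked = NOW()
    WHERE job_name = '${JOB_NAME}';"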
I then have JSP pages that read data from that table and translate things like true/false into HTML that sets the background colours (Red, Amber, Green) and shows the appropriate blocks and details per job.
If you have a few browsers/TVs or monitors showing these, strategically placed around the office, developers get rapid feedback on the results of their code changes, which speeds up development, increases quality and reduces development time and costs – and they can be fun to watch and set up too 🙂
Cheers,
Don