Setting up OpenNebula on Ubuntu

Some very rough notes on installing and configuring OpenNebula on an Ubuntu host.

On the Server host:

apt-get install opennebula

Adding system user `oneadmin' (UID nnn) ...
Adding new user `oneadmin' (UID nnn) with group `nogroup' ...
The home directory `/var/lib/one' already exists.  Not copying from `/etc/skel'.
adduser: Warning: The home directory `/var/lib/one' does not belong to the user you are currently creating.
Generating public/private rsa key pair.
Your identification has been saved in /var/lib/one/.ssh/id_rsa.
Your public key has been saved in /var/lib/one/.ssh/id_rsa.pub.
The key fingerprint is:
(key fingerprint) oneadmin@linux
The key's randomart image is:
+--[ RSA 2048]----+
|     .    o..    |
(more random artwork)

 

Test the installation by running the new “onehost” command:

 

Usage:
onehost [<options>] <command> [<parameters>]

Options:
-l, --list x,y,z        Selects columns to display with list command
    --list-columns      Information about the columns available
                        to display, order or filter
-o, --order x,y,z       Order by these columns, column starting
                        with - means decreasing order
-f, --filter x,y,z      Filter data. An array is specified
                        with column=value pairs.
-d, --delay seconds     Sets the delay in seconds for top command
-v, --verbose           Tells more information if the command
                        is successful
-h, --help              Shows this help message
    --version           Shows version and copyright information

Commands:

* create (Adds a new machine to the pool)
onehost create <hostname> <im_mad> <vmm_mad> <tm_mad>

* show (Gets info from a host)
onehost show <host_id>

* delete (Removes a machine from the pool)
onehost delete <host_id>

* list (Lists machines in the pool)
onehost list

* enable (Enables host)
onehost enable <host_id>

* disable (Disables host)
onehost disable <host_id>

* top (Lists hosts continuously)
onehost top

 

So far so simple, so it’s time to set up a new host and install the client…

 

This is done by installing the node package like so:

sudo apt-get install opennebula-node

then defining and adding the new node to the master instance via the onehost command, as sketched below.
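
For example, registering a KVM node might look something like this – the node name and driver names here are illustrative and vary by OpenNebula version, so check the onehost usage output above against your own install:

# Hypothetical node name and drivers - adjust to suit your environment:
onehost create node01 im_kvm vmm_kvm tm_ssh
# confirm the new node shows up in the pool:
onehost list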

After that's done, you can move on to setting up a private network for your cloud, creating your own KVM images, and firing up VMs in your own personal cloud.

More detail coming soon…

Mount Windows share on Ubuntu

Some notes on setting up and auto mounting a Windows Share on an Ubuntu host.

I’ve had to Google the details for this more than once, so thought I’d write up the steps here for next time…

First, if it's not there already, add the Windows host's IP address and hostname to the Ubuntu /etc/hosts file:

vim /etc/hosts, then add something like:
192.168.0.123 MyWindowsHostName

 

Now install the smbfs packages if you don’t already have them:

sudo apt-get install smbfs
sudo apt-get install smbclient

 

Once that's complete, and assuming the Windows shares are set up OK (there are no firewall issues, you can ping the host, etc.), check that we can view the host and its shares with smbclient:

root@linux:/mnt# smbclient -L MyWindowsHostName
Enter root's password:
Domain=[LIMBO] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]

Sharename       Type      Comment
---------       ----      -------
Video           Disk
IPC$            IPC       Remote IPC
Music           Disk

Domain=[LIMBO] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]

 

If that gives you back something that looks like your Windows hosts and its share(s), things are looking good 🙂

Further information on using smbclient can be found in its man page if you have any problems.

 

Now it's time to make a local mount point, so as root or via sudo:
mkdir /mnt/Video

Then as root create a password file /etc/cifspw with the login credentials for your Windows account:

username=WINDOWSUSERNAME
password=WINDOWSPASSWORD

It would be good practice to secure that file so that only the owner (root) has read/write access to it:
sudo chmod 600 /etc/cifspw

Then vim /etc/fstab and add a line for the mount:
//MyWindowsHostName/Video   /mnt/Video   cifs   exec,credentials=/etc/cifspw   0   0

If all goes well, the Windows share should now automatically mount to /mnt/Video at boot time.

If you can’t wait to test it, you can do:

sudo mount -a

and check /mnt/Video to see your data… hopefully!
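
If the fstab entry misbehaves, a one-off manual mount of the same share is a quick way to isolate the problem (host, share and mount point as in the examples above):

# mount the share by hand, using the same credentials file:
sudo mount -t cifs //MyWindowsHostName/Video /mnt/Video -o credentials=/etc/cifspw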

Cheers,

Don

 

Some Useful Solaris Commands

Here are a few (mostly) Solaris tips and tricks I have found useful and wanted to keep a note of.

 

prstat

This provides similar info to top on Linux boxes – you can run it as plain old prstat, or give it some options. I like prstat -a as it reports on both processes and users. As with all of these commands, the man pages have further details.
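
For example (the trailing argument is the refresh interval in seconds):

# report on both processes and per-user totals, updating every 5 seconds:
prstat -a 5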

 

xargs

Not just a Solaris command, but this is very useful on any *NIX box – I frequently use it to translate the output from the previous command into something that can be understood by the next one, for example:

find . -type f -name '*.txt' | xargs ls -alrt

will translate and pass the output of the “find” command to ls in a way that ls understands. Note the quotes around '*.txt' – without them, the shell would expand the glob in the current directory before find ever saw it.
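
One caveat: plain xargs splits its input on whitespace, so filenames containing spaces will break the pipeline. On Linux with GNU findutils (Solaris find historically lacks this option) the null-delimited variant is safer:

# null-delimited pipeline, safe for filenames containing spaces:
find . -type f -name '*.txt' -print0 | xargs -0 ls -lrt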

 

pargs

I use the pargs command when I need to get more information on a running process than the Solaris ps utility will give (there's no -v option), particularly for processes started with a lot of arguments.

Call pargs with the PID of your choice, and it will display a nice list of each argument that the process was started with, for example:

> pargs 16446
16446:  /usr/jdk/jdk1.6.0/jre/bin/java com.MyJavaProgram
argv[0]: /usr/jdk/jdk1.6.0/jre/bin/java
argv[1]: com.MyJavaProgram
argv[2]: MyFirstArgument.ini
argv[3]: SomeOtherArg.txt
argv[4]: AndAnotherArg

pargs can also display all of this info on one line with the -l option (useful for scripting), and if you call it with -e it also displays all of the Environment variables too.

 

pwdx

Simply pass it a PID and it will tell you the current working directory for that process.
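
For example (the PID and directory here are hypothetical):

> pwdx 16446
16446:  /export/home/don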

 

[g]rep

When writing a shell script that queries running processes, I often find my own script showing up in the results – for instance a script that does a “ps -eaf | grep MyProcessName” may pick up the java process I'm after (the running instance of “./MyProcessName”) and the grep process-check itself (as in the “ps -eaf | grep MyProcessName”).

A handy way to avoid this is by changing your search criteria to “grep [M]yProcessName” instead. Grep treats [M] as a character class matching just “M”, so the pattern still matches MyProcessName – but the grep command line itself now contains literal square brackets, so your grep query no longer matches its own search 🙂
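
A quick illustration, with a hypothetical process name:

# matches the target process AND the grep command itself:
ps -eaf | grep MyProcessName
# matches only the target process:
ps -eaf | grep [M]yProcessName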

 

I will add more when I think of them, if you have any good ones then please post them!

Persisting file permissions in a tar.gz file with Ant & tar

Discovered an interesting issue recently where file permissions were not being preserved when tarring up a build with Ant.

The existing approach was to chmod the files as desired then simply tar them all up and hope for the best:

<tar destfile="dist/${hudson.build.tag}.tar.gz" basedir="dist/" compression="gzip" />

This doesn't work, but if you use a tarfileset and set the filemode, you can explicitly set things as required, like this:

<tar destfile="dist/${hudson.build.tag}.tar.gz" longfile="gnu" compression="gzip">
  <tarfileset dir="dist/" filemode="755">
    <include name="**/*scripts/*.sh" />
    <include name="**/somescript.ext" />
  </tarfileset>
  <tarfileset dir="dist/">
    <include name="**/*" />
    <exclude name="**/*scripts/*.sh" />
    <exclude name="**/somescript.ext" />
  </tarfileset>
</tar>

Here I am adding the first two sets of scripts with filemode 755, then adding everything else, which will get the default/umask permissions. I exclude the previously-added files from the second fileset – I'm not sure if that's strictly required, but I don't want to risk overwriting them.

Now when you gunzip and tar xvf the resulting build, you get the required permissions.
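
You can also sanity-check the permissions inside the archive without extracting it (the archive name here is illustrative):

# list the contents and confirm the scripts carry -rwxr-xr-x:
tar -tvzf dist/mybuild.tar.gz | grep '\.sh$'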

There’s more info and further examples in the Apache Ant Manual.

Cheers,

Don

Using lock files in a bash shell script

This post is old (2011), there are many better ways to do this.
See https://www.unix.com/man-page/linux/1/flock/ for one example.
Also pgrep and lsof examples here:
https://www.baeldung.com/linux/bash-ensure-instance-running

Wrote this script recently – I had written a simple shell script that updated an HTML page with its output, then realised it would be all too easy for simultaneous writes to clobber the file.

This kind of concurrency can & should really be solved properly by using a database obviously, but it got me thinking and playing around and I ended up with the below – it’s clearly very “happy path” with loads of room for improvements – please feel free to suggest some or add to it 🙂


#!/bin/bash

#
# Example script that uses lock files to avoid concurrent writes
# TODO: loads more validation and error handling!
#
# www.DonaldSimpson.co.uk
# 25th May 2011

setup_lockfile(){
# set name of this program's lockfile:
MY_NAME=`basename $0`
LOCKFILE=/tmp/lock.${MY_NAME}.$$
# MAX_AGE is how long (in minutes) to wait until we assume a lock file is defunct
# scary stuff, with loads of scope for improvement...
# could use fuser and see if there is a process attached/not?
# maybe check with lsof? or just bail out?
MAX_AGE=5
echo "My lockfile name is ${LOCKFILE}"
sleep 1
}

check_lock(){
# Check for an existing lock file (ls exits non-zero when the glob matches nothing)
while ls /tmp/lock.${MY_NAME}* >/dev/null 2>&1
do
# A lock file is present - if it's older than MAX_AGE minutes, assume it's stale
if [ -n "`find /tmp/lock.${MY_NAME}* -mmin +${MAX_AGE} 2>/dev/null`" ]; then
echo "WARNING: found and removing old lock file... `ls /tmp/lock.${MY_NAME}*`"
rm -f /tmp/lock.${MY_NAME}*
else
echo "A recent lock file already exists:"
ls /tmp/lock.${MY_NAME}* | awk -F. '{print $2"."$3", with PID: "$4}'
echo "Will wait until the lock file is over ${MAX_AGE} minutes old then remove it..."
fi
sleep 5
done
}

create_lock(){
# ok to carry on... create a lock file – quickly 😉
touch ${LOCKFILE}
# check we managed to make it ok...
if [ ! -f ${LOCKFILE} ]; then
echo "Unable to create lockfile ${LOCKFILE}!"
exit 1
fi
echo "Created lockfile ${LOCKFILE}"
}

cleanup_lock(){
echo "Cleaning up..."
rm -f ${LOCKFILE}
if [ -f ${LOCKFILE} ]; then
echo "Unable to delete lockfile ${LOCKFILE}!"
exit 1
fi
echo "Ok, lock file ${LOCKFILE} removed."
}

setup_lockfile
check_lock
create_lock

# Any calls to exit from here on should first call cleanup_lock
# Do main processing tasks here...
sleep 20


# All Done.
cleanup_lock
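
An alternative to remembering to call cleanup_lock before every exit: bash's trap builtin can run the cleanup automatically when the script exits, so an early exit or a Ctrl-C doesn't leave a stale lock behind. A minimal sketch:

# run cleanup_lock on any script exit, including interrupts:
trap cleanup_lock EXIT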

Using Postfix for WordPress email notifications

Here are my notes on installing and configuring Postfix on an Ubuntu host for WordPress.

By default, my WordPress and Ubuntu installation wasn’t able to send out emails to do things like set up new users, notify me about new posts, reset forgotten passwords etc etc.  Getting Postfix working is not very difficult once you’ve figured out what settings to use in the main.cf file.

First, install Postfix:

apt-get install postfix

and copy over the example config file/template:

cp /usr/share/postfix/main.cf.debian /etc/postfix/main.cf

I then realised I'd already installed Sendmail on this box (doh!), so that needed to be killed and cleaned up:

ps -eaf | grep [s]endmail

kill -9 {the pid}

apt-get remove sendmail

Now I could start up postfix:

/usr/sbin/postfix start

I’d gone with the default options during the initial install, but it looks like they need a bit of a rethink…

dpkg-reconfigure postfix

then backup and tweak this file to suit:

vi /etc/postfix/main.cf

after which you may need to do “postfix reload”
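
The settings that matter for a simple send-only box are roughly these – the values below are illustrative rather than a known-good config, so adjust them to your own hostname and network:

# illustrative main.cf entries for a host that only sends mail:
myhostname = myhost.example.com
mydestination = $myhostname, localhost
mynetworks = 127.0.0.0/8
inet_interfaces = loopback-only
relayhost =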

Once that looked reasonably ok I wanted to test sending mail from the command line – there was no mailx/mail tool present but the mailutils package looked worth a try:

apt-get install mailutils

This gave me a “mail” command, so the next step was to test sending myself an internal mail:

echo testinternal | mail -s "test mail sent to local user" don

then an external one:

echo testexternal | mail -s "test mail sent to external" myaddress@gmail.com

and all worked well – WordPress can now send out new registration details and reset passwords etc.

If you have any issues, these files and commands are worth checking:

tail -1000f /var/log/mail.warn

tail -1000f /var/log/mail.err

vi /etc/postfix/main.cf

apt-get install telnet

telnet localhost 25
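
If telnet connects, Postfix is listening and should greet you with a banner along the lines of “220 yourhost ESMTP Postfix”. A rough non-interactive version of the same check (the one-second sleep gives the banner time to arrive):

# open port 25, wait briefly for the banner, then quit:
( sleep 1; echo QUIT ) | telnet localhost 25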

 

Hope this helps – if you have any feedback or updates please add a comment below 🙂

Cheers,

Don

 

Serving WordPress as the default page

Here’s a note of what I needed to do in order to get WordPress serving as the default site on my domain – it was originally at www.donaldsimpson.co.uk/wordpress/ and I wanted it to just be www.donaldsimpson.co.uk

A bit of a Google shows there are many ways to do this, but here’s how I did it:

vi /opt/bitnami/apache2/conf/httpd.conf

then comment the current entry and add a new one pointing to the htdocs dir for WordPress:

#DocumentRoot "/opt/bitnami/apache2/htdocs"
DocumentRoot "/var/www/html/donwp.freemyip.com"

Then restart Apache (/opt/bitnami/apache2/bin/apachectl restart or similar) after which you just need to go to the WordPress Admin General Settings page and change these values to point to the root of your site/domain:

WordPress address (URL): www.donaldsimpson.co.uk

Site address (URL): www.donaldsimpson.co.uk

And that should be that – you can now delete that backup you made at the start…

 

Update:

It may be a good idea to define WP_HOME and WP_SITEURL in your wp-config.php file too, like so:

define('WP_HOME', 'https://www.donaldsimpson.co.uk');
define('WP_SITEURL', 'https://www.donaldsimpson.co.uk');

This avoids a database lookup to get these details, which should speed things up fractionally too 🙂

 

 

Quick directory listing for large file systems

 

Useful bit of Perl code – folk at work found this approach on the web somewhere, and it's apparently much quicker than doing a recursive find:

 

#!/usr/bin/perl
use strict;
use warnings;

my @dirlist = ();

sub process_files
{
    my $path = shift;

    opendir (my $dh, $path) or die "Unable to open $path: $!";

    # skip the "." and ".." entries and prefix each name with its path
    my @files =
        map  { $path . '/' . $_ }
        grep { $_ ne '.' && $_ ne '..' }
        readdir ($dh);

    closedir ($dh);

    for my $file (@files)
    {
        if (-d $file)
        {
            print $file . "\n";
            push @dirlist, $file;
            # recurse into the subdirectory
            process_files ($file);
        }
    }
}

process_files(".");


Pardon the quick-and-dirty code 😉

Cheers,

 

Don
