Mount Windows share on Ubuntu

Some notes on setting up and auto-mounting a Windows share on an Ubuntu host.

I’ve had to Google the details for this more than once, so thought I’d write up the steps here for next time…

First, if it’s not there already, add the Windows host’s IP address and hostname to the Ubuntu /etc/hosts file:

vim /etc/hosts, then add something like:
192.168.0.123 MyWindowsHostName


Now install the smbfs and smbclient packages if you don’t already have them (on newer Ubuntu releases, the cifs-utils package replaces smbfs):

sudo apt-get install smbfs
sudo apt-get install smbclient


Once that’s complete, and assuming the Windows shares are set up OK (no firewall issues, you can ping the host, etc.), check that you can view the host and its shares with smbclient:

root@linux:/mnt# smbclient -L MyWindowsHostName
Enter root's password:
Domain=[LIMBO] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]

        Sharename       Type      Comment
        ---------       ----      -------
        Video           Disk
        IPC$            IPC       Remote IPC
        Music           Disk

Domain=[LIMBO] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]


If that gives you back something that looks like your Windows host and its share(s), things are looking good 🙂

Further information on using smbclient can be found in its man page if you have any problems.


Now it’s time to make a local mount point, so as root (or via sudo):

mkdir /mnt/Video

Then as root create a password file /etc/cifspw with the login credentials for your Windows account.

username=WINDOWSUSERNAME
password=WINDOWSPASSWORD

It would be good practice to secure that file so that only the owner (root) has read/write access to it:

$ sudo chmod 600 /etc/cifspw
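To avoid even a brief window where the file is world-readable, the whole thing can be done under a restrictive umask. A small sketch, written to a scratch path for illustration (on the real host you would write /etc/cifspw as root):

```shell
# Sketch: create the credentials file under a restrictive umask so it is
# never world-readable, even briefly. The scratch path and placeholder
# credentials are illustrative only.
CREDFILE=./cifspw.example
( umask 077
  cat > "$CREDFILE" <<'EOF'
username=WINDOWSUSERNAME
password=WINDOWSPASSWORD
EOF
)
chmod 600 "$CREDFILE"
ls -l "$CREDFILE"
```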

Then vim /etc/fstab and add a line for the mount:
//MyWindowsHostName/Video   /mnt/Video   cifs   exec,credentials=/etc/cifspw   0   0

If all goes well, the Windows share should now automatically mount to /mnt/Video at boot time.

If you can’t wait to test it, you can do:

sudo mount -a

and check /mnt/Video to see your data… hopefully!
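A quick way to sanity-check the result: mountpoint (from util-linux) returns success only for a path that actually has something mounted on it. A small sketch, using /mnt/Video from above as the example path:

```shell
# mountpoint -q returns 0 only if the given path is a mount point,
# so this reports whether the share actually mounted:
check_mounted() {
  if mountpoint -q "$1"; then
    echo "$1 is mounted"
  else
    echo "$1 is NOT mounted"
  fi
}
check_mounted /mnt/Video
```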

Cheers,

Don


Some Useful Solaris Commands

Here are a few (mostly) Solaris tips and tricks I have found useful and wanted to keep a note of.


prstat

This provides similar info to top on Linux boxes – you can run it as plain old prstat, or give it some options. I like prstat -a as it reports on both processes and users. As with all of these commands, the man pages have further details.


xargs

Not just a Solaris command, but this is very useful on any *NIX box – I frequently use it to translate the output from the previous command into something that can be understood by the next one, for example:

find . -type f -name "*.txt" | xargs ls -alrt

This will translate and pass the output of the find command to ls in a way that ls understands. (Note the quotes around "*.txt" – without them the shell expands the glob before find sees it.)
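One caveat: filenames containing spaces break a plain find | xargs pipe, because xargs splits its input on whitespace. GNU find’s -print0 with xargs -0 passes NUL-delimited names through safely. A small sketch using a scratch directory:

```shell
# Create a demo directory with a filename containing a space, then list
# the matches safely with NUL-delimited output:
mkdir -p /tmp/xargs_demo
touch "/tmp/xargs_demo/notes one.txt" "/tmp/xargs_demo/notes_two.txt"
find /tmp/xargs_demo -type f -name '*.txt' -print0 | xargs -0 ls -alrt
```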


pargs

I use the pargs command when I need more information on a running process than the Solaris ps utility will give (there’s no -v option), for example when a process was started with a lot of arguments.

Call pargs with the PID of your choice, and it will display a nice list of each argument that the process was started with, for example:

> pargs 16446
16446:  /usr/jdk/jdk1.6.0/jre/bin/java com.MyJavaProgram
argv[0]: /usr/jdk/jdk1.6.0/jre/bin/java
argv[1]: com.MyJavaProgram
argv[2]: MyFirstArgument.ini
argv[3]: SomeOtherArg.txt
argv[4]: AndAnotherArg

pargs can also display all of this info on one line with the -l option (useful for scripting), and if you call it with -e it also displays all of the Environment variables too.


pwdx

Simply pass it a PID and it will tell you the current working directory for that process.


[g]rep

When writing a shell script that queries running processes, I often find my own script showing up in the results – for instance, a script that does a "ps -eaf | grep MyProcessName" may pick up both the process I’m after (the running instance of "./MyProcessName") and the grep process-check itself (as in the "ps -eaf | grep MyProcessName").

A handy way to avoid this is to change your search criteria to "grep [M]yProcessName" instead. Grep interprets the square brackets as a character class (matching just the letter M), so the pattern still matches MyProcessName – but the grep command line now contains the literal text "[M]yProcessName", which the pattern does not match, so your grep query no longer finds its own search 🙂
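The trick can be seen in isolation with a couple of canned ps-style lines (the process names and PIDs are made up for the demo):

```shell
# The regex [M]yProcessName matches the text "MyProcessName" but not the
# literal text "[M]yProcessName", so only the first line survives:
printf '%s\n' \
  "user 123 ./MyProcessName" \
  "user 456 grep [M]yProcessName" \
  | grep "[M]yProcessName"
```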


I will add more when I think of them, if you have any good ones then please post them!

Persisting file permissions in a tar.gz file with Ant & tar

Discovered an interesting issue recently where file permissions were not being preserved when tarring up a build with Ant.

The existing approach was to chmod the files as desired then simply tar them all up and hope for the best:

<tar destfile="dist/${hudson.build.tag}.tar.gz" basedir="dist/" compression="gzip" />

This doesn’t work because the tar task doesn’t read permissions from the filesystem, but if you use a tarfileset and set the filemode you can explicitly set things as required, like this:

<tar destfile="dist/${hudson.build.tag}.tar.gz" longfile="gnu" compression="gzip">
  <tarfileset dir="dist/" filemode="755">
    <include name="**/*scripts/*.sh" />
    <include name="**/somescript.ext" />
  </tarfileset>
  <tarfileset dir="dist/">
    <include name="**/*" />
    <exclude name="**/*scripts/*.sh" />
    <exclude name="**/somescript.ext" />
  </tarfileset>
</tar>

Here I am adding the first two scripts with mode 755, then adding everything else, which gets the default/umask permissions. I exclude the first set of files from the second tarfileset so they aren’t added a second time with the default permissions.

Now when you gunzip and tar xvf the resulting build, you get the required permissions.
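You can also verify that the modes survived by listing the archive with tar -tzvf and checking the permissions column. A self-contained sketch with scratch paths (not the real build layout):

```shell
# Build a small example archive containing one 755 script, then list it
# and inspect the mode column of the output:
mkdir -p /tmp/tardemo/scripts
echo 'echo hello' > /tmp/tardemo/scripts/run.sh
chmod 755 /tmp/tardemo/scripts/run.sh
tar -czf /tmp/tardemo.tar.gz -C /tmp/tardemo scripts
tar -tzvf /tmp/tardemo.tar.gz | grep 'run.sh'
```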

There’s more info and further examples in the Apache Ant Manual.

Cheers,

Don

Using lock files in a bash shell script

Wrote this script recently – I had written a simple shell script that updated an HTML page with its output, then realised it would be all too easy for simultaneous writes to clobber the file.

This kind of concurrency can (and should) really be solved properly by using a database, but it got me thinking and playing around, and I ended up with the below – it’s clearly very “happy path” with loads of room for improvement – please feel free to suggest some or add to it 🙂


#!/bin/bash

#
# Example script that uses lock files to avoid concurrent writes
# TODO: loads more validation and error handling!
#
# www.DonaldSimpson.co.uk
# 25th May 2011

setup_lockfile(){
    # Set the name of this program's lockfile:
    MY_NAME=`basename $0`
    LOCKFILE=/tmp/lock.${MY_NAME}.$$
    # MAX_AGE is how long (in minutes) to wait until we assume a lock file is defunct
    # scary stuff, with loads of scope for improvement...
    # could use fuser and see if there is a process attached/not?
    # maybe check with lsof? or just bail out?
    MAX_AGE=5
    echo "My lockfile name is ${LOCKFILE}"
    sleep 1
}

check_lock(){
    # Check for an existing lock file
    while [ -f /tmp/lock.${MY_NAME}* ]
    do
        # A lock file is present - is it old enough to be considered defunct?
        if [ -n "`find /tmp/lock.${MY_NAME}* -mmin +${MAX_AGE} 2>/dev/null`" ]; then
            echo "WARNING: found and removing old lock file... `ls /tmp/lock.${MY_NAME}*`"
            rm -f /tmp/lock.${MY_NAME}*
        else
            echo "A recent lock file already exists: `ls /tmp/lock.${MY_NAME}* | awk -F. {'print $2"."$3", with PID: " $4'}`"
            echo "Will wait until the lock file is over ${MAX_AGE} minutes old then remove it..."
        fi
        sleep 5
    done
}

create_lock(){
    # ok to carry on... create a lock file - quickly ;-)
    touch ${LOCKFILE}
    # check we managed to make it ok...
    if [ ! -f ${LOCKFILE} ]; then
        echo "Unable to create lockfile ${LOCKFILE}!"
        exit 1
    fi
    echo "Created lockfile ${LOCKFILE}"
}

cleanup_lock(){
    echo "Cleaning up... "
    rm -f ${LOCKFILE}
    if [ -f ${LOCKFILE} ]; then
        echo "Unable to delete lockfile ${LOCKFILE}!"
        exit 1
    fi
    echo "Ok, lock file ${LOCKFILE} removed."
}

setup_lockfile
check_lock
create_lock

# Any calls to exit from here on should first call cleanup_lock
# Do main processing tasks here...
sleep 20

# All done.
cleanup_lock
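As one possible improvement along the lines invited above, util-linux’s flock(1) hands the locking to the kernel, so there is no race window between checking for and creating the lock file, and the lock is released automatically when the script exits or is killed. A sketch (the lockfile name /tmp/lock.flockdemo is illustrative):

```shell
#!/bin/bash
# Alternative sketch using flock(1): the kernel arbitrates the lock, so
# there is no check-then-create race, and the lock vanishes automatically
# when the script exits or is killed.
LOCKFILE=/tmp/lock.flockdemo

# Open the lock file on file descriptor 9 and try an exclusive lock:
exec 9>"${LOCKFILE}"
if ! flock -n 9; then
    echo "Another instance already holds ${LOCKFILE}, exiting."
    exit 1
fi

echo "Got the lock, doing main processing..."
sleep 1
# No cleanup needed - the lock is released when fd 9 closes at exit.
```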

Updated to latest Jenkins

Finally got around to trying out “Jenkins”, the latest incarnation of the forked “Hudson” project, and one of my favourite tools. Jenkins looks and works very much like the latest Hudson, but as it had been a while since I last browsed the plugins there were a few nice additions.

When I first started using Hudson there were about 5 basic plugins, and the number and quality available in the latest version of Jenkins is really impressive – obviously it depends on what sort of tasks and tools you use, but there’s something for everyone – here is a list of the ones I found most useful and interesting:

Hudson Google Calendar plugin

This plugin publishes the job status to Google Calendar.

I think this has maybe been around for a while but I hadn’t used it before – it simply publishes build info to your Google Calendar, which can be a nice way to view a summary for some jobs.

Hudson iPhoneView plugin

This plugin allows you to view the status of your jobs via iPhone or iPod touch.

Haven’t tried this out yet but it’s on my todo list!

Hudson SCP publisher plugin

This plugin uploads build artifacts to repository sites using SCP (SSH) protocol.

This is a plugin for something I would usually script in Ant or shell script – will try this instead as it should make things simpler and cleaner.

Security Realm with no CAPTCHA

Brilliant – the Hudson/Jenkins Captcha can be really tough, being able to remove it is very nice.

Jenkins SSH plugin

This plugin executes shell commands remotely using SSH protocol.

Much like the SVN plugins, this is something I would normally write by hand – this plugin lets you define hosts (with user and passwords) then select them and the task you want to run from within a job, very handy and again much less clutter – another essential.

Startup Trigger

This plugin allows you to trigger a Jenkins build when Jenkins first starts up.

Simple but useful – I have used this one on a server that is restarted nightly; it kicks off some housekeeping jobs and mails me the outcome every morning.

Hudson Subversion Tagging Plugin

This plugin automatically performs Subversion tagging (or copy) on successful build.

Another useful Subversion integration plugin – I have recently been using this in conjunction with another plugin that dynamically generates a drop-down of all available tags in a given SVN location at build time. This allows users to select the tag they want to deploy, a very powerful and useful combination.

Cheers,

Don

Using Postfix for WordPress email notifications

Here are my notes on installing and configuring Postfix on an Ubuntu host for WordPress.

By default, my WordPress and Ubuntu installation wasn’t able to send out emails to do things like set up new users, notify me about new posts, reset forgotten passwords, etc. Getting Postfix working is not very difficult once you’ve figured out what settings to use in the main.cf file.

First, install Postfix:

apt-get install postfix

and copy over the example config file/template:

cp /usr/share/postfix/main.cf.debian /etc/postfix/main.cf

I then realised I’d already installed Sendmail on this box (doh!) so that needed to be killed and cleaned up:

ps -eaf | grep [s]endmail

kill -9 {the pid}

apt-get remove sendmail

Now I could start up postfix:

/usr/sbin/postfix start

I’d gone with the default options during the initial install, but it looks like they need a bit of a rethink…

dpkg-reconfigure postfix

then backup and tweak this file to suit:

vi /etc/postfix/main.cf

after which you may need to run "postfix reload"
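For what it’s worth, here’s the general shape of a minimal main.cf for a host that only sends mail out directly. This is a sketch rather than the settings from my actual box, so every value below is an illustrative placeholder:

```
# /etc/postfix/main.cf - minimal sketch for a host sending mail directly.
# All values are placeholders; adjust to suit your own domain.
myhostname = myhost.example.com
myorigin = /etc/mailname
mydestination = myhost.example.com, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::1]/128
inet_interfaces = loopback-only
```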

Once that looked reasonably ok I wanted to test sending mail from the command line – there was no mailx/mail tool present but the mailutils package looked worth a try:

apt-get install mailutils

This gave me a "mail" command, so the next step was to test sending myself an internal mail:

echo testinternal | mail -s "test mail sent to local user" don

then an external one

echo testexternal | mail -s "test mail sent to external" myaddress@gmail.com

and all worked well – WordPress can now send out new registration details and reset passwords etc.

If you have any issues, these files and commands are worth checking:

tail -1000f /var/log/mail.warn

tail -1000f /var/log/mail.err

vi /etc/postfix/main.cf

apt-get install telnet

telnet localhost 25


Hope this helps – if you have any feedback or updates please add a comment below 🙂

Cheers,

Don


Serving WordPress as the default page

Here’s a note of what I needed to do in order to get WordPress serving as the default site on my domain – it was originally at www.donaldsimpson.co.uk/wordpress/ and I wanted it to just be www.donaldsimpson.co.uk

A bit of a Google shows there are many ways to do this, but here’s how I did it:

vi /opt/bitnami/apache2/conf/httpd.conf

then comment out the current entry and add a new one pointing to the htdocs dir for WordPress:

#DocumentRoot “/opt/bitnami/apache2/htdocs”
DocumentRoot “/opt/bitnami/apps/wordpress/htdocs”

Then restart Apache (/opt/bitnami/apache2/bin/apachectl restart or similar) after which you just need to go to the WordPress Admin General Settings page and change these values to point to the root of your site/domain:

WordPress address (URL): www.donaldsimpson.co.uk

Site address (URL): www.donaldsimpson.co.uk

And that should be that – you can now delete that backup you made at the start…


Update:

It may be a good idea to define WP_HOME and WP_SITEURL in your wp-config.php file too, like so:

define('WP_HOME', 'http://www.donaldsimpson.co.uk');
define('WP_SITEURL', 'http://www.donaldsimpson.co.uk');

This avoids a database lookup to get these details, which should speed things up fractionally too 🙂


Quick directory listing for large file systems


A useful bit of Perl code – folk at work found this approach on the web somewhere – apparently it’s much quicker than doing a recursive find:


my @dirlist = ();

sub process_files
{
    my $path = shift;
    opendir (DIR, $path) or die "Unable to open $path: $!";
    # Skip "." and "..", and prefix each entry with its path:
    my @files =
        map  { $path . '/' . $_ }
        grep { !/^\.\.?$/ }
        readdir (DIR);
    closedir (DIR);
    for (@files)
    {
        if (-d $_)
        {
            print $_."\n";
            push @dirlist, $_;
            process_files ($_);
        }
    }
}

process_files(".");


Pardon the indentation/formatting 😉
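For comparison, the same recursive directory listing can be done with stock find; -mindepth 1 (a GNU find option) skips the starting directory itself. The scratch directories below are just for the demo:

```shell
# Make a small directory tree, then list every subdirectory recursively:
mkdir -p /tmp/dirdemo/a/b /tmp/dirdemo/c
find /tmp/dirdemo -mindepth 1 -type d
```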

Cheers,


Don