Hello! My name is
Owen Griffin
and this is my web site.
I'm a technical architect at
Symphony Teleca
based in Reading.
This is where I keep my notes about technologies and stuff I find interesting. I also tweet sparingly and code.
If I've had the pleasure of working with you, let's connect.
You may already know that I drink too much Diet Coke and spend a lot of my time running up and down hills.

How to list modified photos in Shotwell

Shotwell is a non-destructive photo viewer for Linux. This is great, because it stops you destroying your photos. However, it’s not possible to list the photos you’ve modified from within the application. Very annoying.

To list the photos you’ve modified you have to jump into the SQLite database:

sqlite3 ~/.shotwell/data/photo.db

The PhotoTable table within the database lists the photos in your collection. Its “transformations” field contains any modifications you may have made, so the following SQL query will list the modified photos:

SELECT filename FROM PhotoTable WHERE transformations != "";
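The logic of the query can be sketched in plain Ruby over some hypothetical rows (the field names follow the query above; the filenames are made up):

```ruby
# Hypothetical rows, shaped like PhotoTable records:
photos = [
  { 'filename' => '/photos/holiday.jpg', 'transformations' => '' },
  { 'filename' => '/photos/sunset.jpg',  'transformations' => "[crop]\nleft=10\n" }
]

# Keep only the photos whose transformations field is non-empty:
modified = photos.select { |p| p['transformations'] != '' }.map { |p| p['filename'] }
# modified == ["/photos/sunset.jpg"]
```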

Go! Nanoc! Go! - Gist data source

This is my third article on using the Nanoc3 static site generator. The other topics in this mini-series include

  • Integrating the Compass CSS framework
  • Using Haml templates

This article shows how to display GitHub Gists in Nanoc3. Gists are code snippets hosted on GitHub. I’ve started accumulating a few of them, and I thought it would be cool to list them on my web site.

Nanoc3 includes the concept of DataSources. As the name suggests this is where Nanoc3 sources all of the “items” to display in the site. By default the main data source is the filesystem. All the pages and posts are loaded from the content folder.

Before creating a DataSource I need to load my Gists from GitHub. This can be done very easily using HTTParty. I have placed all of this code within the lib folder of my site as gist_datasource.rb. You can see the complete code for this on GitHub.

require "httparty"

class Gist
  include HTTParty
  base_uri 'http://gist.github.com/'

  def list(username)
    self.class.get('/api/v1/json/gists/' + username)['gists']
  end

  def contents(id, filename)
    self.class.get('/raw/' + id + '/' + filename)
  end
end

The above code creates a new class called Gist which has two methods:

  • list which returns all the Gists associated with a username
  • contents which returns the content of a specified Gist
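As an illustration of the response shape that list works against — the JSON below is an assumed, simplified example, not a captured API response:

```ruby
require 'json'

# An assumed (simplified) body from the old Gist JSON API:
body = '{"gists":[{"repo":"12345","description":"Hello world","owner":"owengriffin","created_at":"2010/11/01 10:00:00 -0700"}]}'

# The data source below walks this structure:
gists = JSON.parse(body)['gists']
first = gists.first
puts first['repo']         # the Gist identifier
puts first['description']
```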

Now on to the DataSource…

Firstly extend the Nanoc3::DataSource class and give it an identifier. Then override the items method, which Nanoc3 will invoke to obtain the list of items from this data source.

In the following code I download a list of Gists from GitHub, and for each of them I create a new Nanoc3::Item.

class GistDataSource < Nanoc3::DataSource
  identifier :gist

  def items
    items = []
    api = Gist.new
    api.list(self.config[:username]).each do |gist|
      attributes = {
        :url => 'http://gist.github.com/' + gist['repo'],
        :title => 'Gist #' + gist['repo'] + ': ' + gist['description'],
        :author => gist['owner'],
        :created_at => gist['created_at'],
        :kind => 'gist'
      }
      items << Nanoc3::Item.new(gist['description'], attributes, '/gist/' + gist['repo'])
    end
    items
  end
end

You may have noticed the config variable. This contains options loaded from the site’s config.yaml file. In this case it is the GitHub username.

The config.yaml file is also used to enable the data source:

data_sources:
  - type: filesystem_unified
    items_root: /
    layouts_root: /
  - type: gist
    username: 'owengriffin'

With the above code the Gists should be loaded into the site’s collection of items. You’ll need to replace the username with your own.
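As a sketch of how that YAML maps onto the hash the data source sees (the field names follow the snippet above; the parsing here is illustrative, not Nanoc3’s actual loading code):

```ruby
require 'yaml'

# The data_sources section of config.yaml, as a string:
raw = <<YAML
data_sources:
  - type: filesystem_unified
    items_root: /
  - type: gist
    username: owengriffin
YAML

config = YAML.load(raw)

# Find the entry for the gist data source and read its username option:
gist_entry = config['data_sources'].find { |ds| ds['type'] == 'gist' }
puts gist_entry['username']  # => owengriffin
```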

Now, how to display them? Firstly I added the following helper, which returns all of the Gist items in the site:

def gists
  @items.select { |item| item[:kind] == 'gist' }
end
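The helper simply filters the site’s items on the kind attribute set by the data source; in plain Ruby terms:

```ruby
# A hypothetical mixture of site items:
items = [
  { :kind => 'gist', :title => 'Gist #1' },
  { :kind => 'article', :title => 'A post' },
  { :kind => 'gist', :title => 'Gist #2' }
]

# Keep only the items the Gist data source created:
gists = items.select { |item| item[:kind] == 'gist' }
# gists now holds the two gist items
```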

Then within my site layouts I added the following Haml to my templates:

%h2 Gists
- gists.each do |gist|
  %a(href="#{gist.path}")= gist[:title]

Last but not least I wanted to create a page for each of my Gists. This required creating a new rule and layout.

All Gist items created by my data source will be placed within the /gists/ path on my site.

compile '/gist/*' do
  filter :kramdown
  layout 'kind_gist'
end

Notice that above I’m using a kind_gist layout. This layout displays the title of the Gist and embeds the Gist into the page using GitHub’s Javascript code.

    = item[:title]

The regular expression just extracts the Gist identifier from the pathname.
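That regular expression isn’t shown in the snippet above, but a minimal sketch of the kind of extraction it performs might look like this (the path layout follows the /gist/ convention used earlier):

```ruby
# An item path as produced by the data source:
path = '/gist/12345/'

# Pull the numeric Gist identifier out of the path:
id = path[%r{\A/gist/(\d+)}, 1]
puts id  # => 12345
```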

Go! Nanoc! Go! - Haml templates

This is my second article on using the Nanoc3 static site generator. I’ve previously written about integrating the Compass CSS framework with Nanoc. This time I’m documenting how to use Haml templates.

Haml is a language used for generating HTML. It’s rather nice because it gently coerces the developer into generating well-structured markup. It’s also designed to make life easier with various little enhancements.

Haml templates are well supported in Nanoc. Take the basic site (compass_tutorial) I created for my first article and add the following code to layouts/default.haml.

!!!
%html
  %head
    %meta(name="generator" content="nanoc 3.1.6")
    %title
      A Brand New nanoc Site - 
      = @item[:title]
    %link(href="/stylesheets/screen.css" media="screen, projection" rel="stylesheet" type="text/css")
    %link(href="/stylesheets/print.css" media="print" rel="stylesheet" type="text/css")
    /[if IE]
      %link(href="/stylesheets/ie.css" media="screen, projection" rel="stylesheet" type="text/css")
  %body
    #main
      = yield
    #sidebar
      %h2 Documentation
      %ul
        %li
          %a(href="http://nanoc.stoneship.org/docs/") Documentation
        %li
          %a(href="http://nanoc.stoneship.org/docs/3-getting-started/") Getting Started
      %h2 Community
      %ul
        %li
          %a(href="http://groups.google.com/group/nanoc/") Discussion Group
        %li
          %a(href="irc://chat.freenode.net/#nanoc") IRC Channel
        %li
          %a(href="http://projects.stoneship.org/trac/nanoc/") Wiki

The above is a copy of the default Nanoc3 template, with the Compass stylesheets added, as per the previous post, converted into Haml.

Now, within the Rules file, you need to change the default layout filter from erb to Haml. Find the following line:

layout '*', :erb

It should now look like the following:

layout '*', :haml

Remove the layouts/default.html file, because it is no longer used.

Recompile the site and view it:

nanoc3 co
nanoc3 view

When you open http://localhost:3000/ in a web browser you should find that nothing has changed.

One final note: if you start to use Kramdown and Coderay for your Nanoc3 documents, ensure that you enable Haml’s ugly option. By default Haml formats your output so that it has lovely spacing. This is quite nice when viewing the HTML source of your pages. However, the whitespace is also added to the contents of any <pre> tag, screwing with any formatting on the page. Very annoying if you have lots of code blocks. To pass options to the filter, change the layout rule as follows:

layout '*', :haml, { :format => :html5, :ugly => true }

Go! Nanoc! Go! - Using Compass

Here follows my quick and dirty notes on integrating Nanoc3 and Compass.

Nanoc3 is a static site generator, and Compass is a stylesheet authoring framework. And if you don’t know what they are, then you’re probably not going to find the following very useful.

Create a new site

For this quick tutorial, I’m going to assume you want to create a new site - just in case you accidentally destroy an existing one.

nanoc3 create_site compass_tutorial

Compass configuration

Create the file compass_tutorial/config.rb and enter the following code:

http_path    = "/" 
project_path = "." 
css_dir      = "output/stylesheets/" 
sass_dir     = "content/stylesheets/" 
images_dir   = "output/images/"

sass_options = {
  :syntax => :scss
}

Open compass_tutorial/Rules and add the following to the top:

require 'compass'
Compass.add_project_configuration 'config.rb'

Remove the existing compile rule for the stylesheets folder and add the following:

compile '/stylesheets/*' do
  filter :sass, Compass.sass_engine_options
end

Remove the existing routing rule for the stylesheets folder and add the following:

route '/stylesheets/*' do
  item.identifier.chop + '.css'
end
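Nanoc3 identifiers carry a trailing slash, which is why chop is needed before the extension is appended; for example:

```ruby
# A Nanoc3 identifier for a stylesheet item (trailing slash included):
identifier = '/stylesheets/screen/'

# Drop the trailing slash and append the compiled extension:
route = identifier.chop + '.css'
puts route  # => /stylesheets/screen.css
```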

Initialise Compass

Within the compass_tutorial folder run the following command:

compass install blueprint

This will install the Blueprint CSS framework within Compass. We’ve already created the Compass configuration manually, so we don’t need to run the create command.

Modify the default layout layouts/default.html.

Remove the following line:

<link rel="stylesheet" type="text/css" href="/style.css">

And add the following:

<link href="/stylesheets/screen.css" media="screen, projection" rel="stylesheet" type="text/css" />
<link href="/stylesheets/print.css" media="print" rel="stylesheet" type="text/css" />
<!--[if lt IE 8]>
  <link href="/stylesheets/ie.css" media="screen, projection" rel="stylesheet" type="text/css" />
<![endif]-->

Compile and view

You can now compile your site:

nanoc3 co

Host it by running WEBrick:

nanoc3 view

And then view it using your favourite web browser by going to the following URL:

http://localhost:3000/
And that’s it, not much more here.

Converting my blog to use nanoc3

What is a static site generator?

Way back when I started this blog I was using Wordpress. Wordpress is cool, but too cool for me. It’s morphing into a full-blown content management system - which I don’t need for my simple blog. I like writing my blog posts in Markdown, and even though Wordpress has a plugin, it screws with the formatting, and besides - I prefer to write things in Emacs.

And poor Wordpress. None of my content is dynamic. Even the Twitter stream, which displays my latest status updates, is a bit futile when I only update my Twitter status every 3 months. I was using all that processing power to generate something which was basically just HTML.

Why write my own?

It didn’t seem like a difficult thing to do: a couple of scripts to convert some Markdown into HTML, and then something to place the generated HTML into a Haml template. It was very simple, but then I started adding things. My quick and simple static site generator started to suffer from feature creep and whoneedswp was born.

I started adding support for Disqus, Google Analytics and syndication of Twitter feeds. Soon I’d open sourced the code so the world could see the delights of my hastily put together Ruby scripts.

However, instead of persevering with my own static site generator I’ve decided to look at some others. Mine certainly isn’t perfect, and I don’t have the time to maintain it anymore. I needed to find an alternative.

Along comes Nanoc3

After reading thechangelog I stumbled upon nanoc3. Nanoc appears to be quite an old project, but it has all the working parts required to convert my website from using whoneedswp.

An understanding of Rules and how to break them

Nanoc3 is more advanced than whoneedswp. There is a lot more flexibility in how the site is generated. whoneedswp required content to be placed in the correct folder structure. Nanoc3 allows you to specify your own. You can even create new data sources of content. All of this is specified within a special config.yaml.

The other configuration file is Rules. This specifies what type of content there is (Nanoc3 supports more than just Markdown), what templates are used, and how the content is converted into HTML.

Migration of content, adding YAML headers to files

One of the features of whoneedswp was that it would scan the content of pages looking for metadata: lines which started with “Summary” or “Tags”. Anything following the colon was removed from the content and made available to the page templates. This allowed me to display the summary of the page in an alternative location and, more interestingly, allowed me to generate tag clouds for my content.

nanoc3 also allows you to embed metadata within the content of the pages; but instead of scanning the content of the document it reads a section of YAML from the top of the document. YAML is a “straight-forward machine parsable data serialisation format”, which allows you to easily describe metadata associated with the page in a format which is easily converted into code. Marvellous.

Tag clouds and automatically generated pages

Although there are a couple of helpers for tagging content in nanoc these only extend to listing tags associated with a specified item or returning items associated with a specified tag.

There are two main components related to tags in my site; the tag cloud and the tag page. The tag cloud lists all the tags in the site and emphasises the more popular ones. Each tag page lists all the items which are associated with the specified tag.

The Helper interface allows you to specify functions which can be used within any item template. Unfortunately, because I wanted to place the tag cloud on every page, I would have to generate the tag cloud before any of the page content was created. Fortunately within the Rules file there is provision for pre-processing.

preprocess do
  @site.collect_all_tags
end

The above block of code invokes the collect_all_tags function within Nanoc3::Site. I added this function by adding some custom code to the lib/ folder within my site.

collect_all_tags not only counts the number of tags and assigns each tag a weighting, but it also creates a new Nanoc3::Item for every tag. These items have no content, but have a name and title set. The items are also given the virtual path of /tags/ which by adding a simple compile directive to the Rules file allows them to use a custom template.
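The implementation itself isn’t shown here, but the counting-and-weighting idea can be sketched roughly like this (not the actual collect_all_tags code; the items are made up):

```ruby
# Hypothetical items, each carrying a list of tags:
items = [
  { :tags => ['ruby', 'nanoc'] },
  { :tags => ['ruby'] },
  { :tags => ['linux'] }
]

# Count occurrences of each tag across all items:
counts = Hash.new(0)
items.each { |item| item[:tags].each { |tag| counts[tag] += 1 } }

# Weight each tag relative to the most popular one:
max = counts.values.max.to_f
weights = {}
counts.each { |tag, n| weights[tag] = n / max }
# 'ruby' gets weight 1.0; 'nanoc' and 'linux' get 0.5
```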

compile '/tags/[^\/]+/*' do
  filter :kramdown
  layout 'kind_tag'
end

The kind_tag template dynamically builds the content for the item based on the name tag assigned to it and the pages assigned to that tag:

- items_with_tag(item[:name]).each do |i|
  = link_to(i[:title], i, { :class => "title" })
  %p= i[:summary]

Maven, Spring, JRuby and Gems

Spring has Dynamic Language support, which allows you to write beans in languages which can execute on the Java Virtual Machine (JVM). I’ve been experimenting with using JRuby to write beans in Ruby.

The code for all of this is on my Github profile at maven-spring-ruby.

You can find a simple example of embedding Ruby in a Spring application within the spring-helloworld Maven project.

Why am I doing this? Writing in Ruby has many advantages, not least of which is the support of a vast array of libraries. I hope to re-use some of those libraries within a Spring application.

My Spring application uses Maven to resolve dependencies, download the required libraries, execute any tests and package up the resulting JAR file. Maven has a central repository of “artifacts”, which are Java libraries packaged in JAR files with some associated metadata.

Ruby libraries are known as Gems and can be installed directly into the environment (using a gem install command), as opposed to my Java application whereby the libraries are included with the application itself.

Environment or application scope?

Java libraries consist of classes which are loaded from the class path when the application is executed. Ruby behaves similarly in that it will scan a load path when you reference a Ruby library. The Ruby load path consists of locations in the Ruby environment, whereas the Java class path is normally a list of JAR files the application uses. Maven manages the Java class path; any dependency specified in the pom.xml with the correct scope will be added automatically.
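Ruby’s load path is visible from within a script as the $LOAD_PATH array, and it can be manipulated at runtime; a small sketch (the directory below is hypothetical):

```ruby
# The load path is just an array of directories Ruby searches on `require`:
puts $LOAD_PATH.is_a?(Array)

# Prepending a directory is roughly analogous to adding a JAR to the class path:
$LOAD_PATH.unshift('/opt/my_gems/lib')
puts $LOAD_PATH.first  # => /opt/my_gems/lib
```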

The key difference here is that the libraries for Java can be installed and loaded on a per-application basis, whilst those for Ruby are usually installed on a per-environment basis.

It is possible to maintain multiple Ruby environments on the same system using tools such as RVM or Bundler and associate a Ruby environment with the application - but this could be messy. Ideally I want all of my environment contained within the resulting JAR of my Maven build. I don’t want my application to be too dependent on the system environment - I want them to be as loosely coupled as possible.

NB: When referring to a Ruby environment I’m talking about the file structure and associated executables within the RUBY_HOME path.

Maven Gems?

Maven doesn’t download Gems and won’t make them accessible to JRuby. JRuby doesn’t include any Gems other than Rake and RSpec. Creating Maven modules for every Gem I need would be painful.

Manipulating JRuby environment - bastardising jruby.home

JRuby uses the Java property jruby.home to point to its Ruby environment. By default when you execute JRuby this will be the location of the extracted JAR file. Taking this into account, it’s therefore possible to change the jruby.home property to point to a location on the file system of an alternative JRuby installation which already has the Gems installed.

System.setProperty("jruby.home", "/home/ogriffin/.rvm/rubies/jruby/");

This isn’t a good idea because it still relies on the alternative JRuby environment having the Gems installed. It also relies on that environment being consistent across all the machines to which you deploy the application.

However it is the easiest solution, and you benefit from using Ruby-ish tools such as RVM.

Corrupting the JRuby artifact

An alternative is to install the JRuby gems into the JRuby artifact, rename it and package the new modified-JRuby JAR with your application. This is the approach I’ve settled on. I got this idea from Nick Sieger in his post JRuby 1.1.6: Gems in a jar.

This repository contains 3 modules:

  • jruby-gems - produces a JAR file which contains only the gems I require within the project
  • jruby-custom - packages up jruby-gems and the JRuby Maven artifact jruby-complete together
  • spring-gems - a simple Spring application which contains a Ruby script which depends on jruby-custom


jruby-gems

This is ugly, really ugly. This module consists of a pom.xml file which executes gem install into a target folder. That target folder is packaged up as a JAR. It uses the exec-maven-plugin to run JRuby:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <configuration>
    <executable>java</executable>
    <arguments>
      <argument>-classpath</argument>
      <classpath />
      <argument>org.jruby.Main</argument>
      <argument>-S</argument>
      <argument>gem</argument>
      <argument>install</argument>
      <argument>-i</argument>
      <argument>target/classes</argument>
      <argument>chronic</argument>
    </arguments>
  </configuration>
</plugin>

The final configuration elements within the configuration are the gems which need to be installed. The -i option specifies that the gems will be installed within the target/classes folder.


jruby-custom

Slightly less ugly - but still fairly nasty. This module uses the maven-shade-plugin to generate an uber-jar containing all of the dependencies. For this module the only dependencies are JRuby and the jruby-gems module. This results in the 2 JARs being combined into an artifact which contains the gems from jruby-gems and the JRuby runtime. This customised artifact can then be included in any application we like, and it will have access to the Gems.


spring-gems

This is the main application. It has jruby-custom as a dependency, along with all that is required for a Spring application. It contains one test, ContextTest, which will run the jruby_hello.rb script. This script has a reference to the chronic gem installed by jruby-gems.

Install JRuby as a Maven artifact

You need to download the latest JRuby from JRuby.org and install it into your Maven repository. This allows it to be included as a Maven dependency for other projects.

wget http://jruby.org.s3.amazonaws.com/downloads/1.5.3/jruby-complete-1.5.3.jar
mvn install:install-file -Dfile=jruby-complete-1.5.3.jar -DgroupId=org.jruby -DartifactId=jruby-complete -Dversion=1.5.3 -Dpackaging=jar

Running the tests from the parent pom will not work because the artifacts do not get installed into the repository.

Titanium development on Linux

Here follow my notes on writing a simple application using Appcelerator Titanium. Appcelerator Titanium allows you to build cross-platform native applications for mobile devices using Javascript. Since I’m currently posing as a Javascript fan-boy I thought I’d take a look.


Installation

Installation is fairly straightforward - and there is plenty of documentation, thanks to quite a verbose getting started guide.

The steps are fairly simple.

The first time Titanium runs it will offer to install into either your /home/ or /opt/ folder. If you decide to install into the /opt/ folder then ensure that you have the necessary permissions - Titanium just disappears if you haven’t got the correct permissions.

Once all the packages have been downloaded and installed you’ll find that the '~/.titanium/' folder has been populated. This appears to be where Titanium stores all of its application code. To actually run Titanium, just re-run the executable you installed - this time when it loads you’ll find that the Titanium application appears.

g_malloc_n error

If Titanium fails to appear then check the console output. I found that I had the following error:

./Titanium Developer: symbol lookup error: /usr/lib/libgtk-x11-2.0.so.0: undefined symbol: g_malloc_n

A quick Google led me to the Titanium forums with the following solution:

rm ~/.titanium/runtime/linux/1.0.0/libgobject*
rm ~/.titanium/runtime/linux/1.0.0/libglib*
rm ~/.titanium/runtime/linux/1.0.0/libgio*
rm ~/.titanium/runtime/linux/1.0.0/libgthread*

Initial update

When Titanium first loads it dials home and checks for any software updates. If there are any they will be applied automatically - but it’s worth restarting Titanium once they are installed. When I downloaded Titanium the mobile development support was installed as an update - so when I tried to load any of the mobile examples Titanium was missing some of the vital user interface components.

Getting Started

The KitchenSink

Over on GitHub you’ll find the Titanium KitchenSink. This project contains examples and tests of all the components provided by Titanium.

Clone the repository using the following command:

git clone git://github.com/appcelerator/KitchenSink.git
Once the sources have been downloaded you can load it from Titanium. In Titanium hit the “Test & Package” tab and click on “Run Emulator”. At the bottom of this window you’ll see an SDK and Screen selector. These settings decide which virtual device Titanium will attempt to execute your application on.

Hitting the “Launch” button will run the selected virtual device and install the application.

Oddly enough, this didn’t work for me first time. A virtual device would boot up, but the application would not appear. Delving into the KitchenSink directory you’ll find a build/android/bin/ folder. If the application is built successfully then you’ll see an app.apk file. When I installed this using adb install app.apk I got an [INSTALL_FAILED_MISSING_SHARED_LIBRARY] error. This means that the virtual device being used does not have the Google APIs required by the application. To resolve this problem I opened the Android SDK, removed the “titanium_8_HVGA” virtual device, and created a new one, ensuring the SDK requirements included the Google APIs.

Once the KitchenSink application has launched on the emulator you can see what is possible using Titanium.

User Interface

Titanium bases its user interface around Views and Windows. Windows contain Views, and Views can contain all kinds of widgets, and possibly some more Views. You construct your interface using Javascript - so to create a button you’d use the following code:

var button1 = Titanium.UI.createButton({title: 'Button Text'});
button1.addEventListener('click', function(e) {
  // respond to the click event here
});

When you create the widget you have to define a set of properties. These properties are different depending on the widget, and are all listed in the API reference documentation. Unfortunately you’re not able to manipulate all of the properties at runtime. Each widget has a number of methods for this purpose.

The addEventListener method allows you to execute some Javascript when the button is pressed. One hiccup - if you’re creating a View containing a View with a number of widgets, you aren’t able to listen to events on the individual widgets. You’ll have to listen to the event on the encompassing View. To identify which widget triggered the event you’ll have to use the clickName property. For example:

button = Titanium.UI.createButton({
  title: '+',
  width: 60,
  left: 240,
  top: 5,
  bottom: 5,
  height: 34,
  clickName: 'add'
});

view = Titanium.UI.createTableView({ data: data, top: 5 });
view.addEventListener('click', function(event) {
  if (event.source.clickName === 'add') {
    // the add button was pressed
  } else if (event.source.clickName !== 'textbox') {
    QuickList.Todo.complete(event.index, event.rowData.title);
  }
});

Local Storage

Titanium also provides some APIs for SQLite3 storage on the device. The following code will create a database handler, named db and create a very simple schema.

db = Titanium.Database.install('../quicklist.db', 'quicklist');

The Titanium.Database.install method will create a database if one does not already exist.

The following code will list the contents of the DONE table and populate an array which is used to generate a TableView.

rows = db.execute('SELECT * FROM DONE');
while (rows.isValidRow()) {
  data.push({title: rows.field(0), color: '#000'});
  rows.next();
}
rows.close();

It’s important to remember to call the close() function on the result set, otherwise an exception will be thrown when you attempt to make another query on the database.


Debugging

The Titanium IDE does not give you any feedback as to why your application falls over. It’s best to launch ddms to see the LogCat output. This will show you a stack trace which should include a line number reference to your Javascript.

This is obviously only the case when the error occurs in your own application. When writing my own test application I found that an error was thrown with a stack trace referencing Titanium’s classes. For example, the following error occurred when inserting a new row into a table:

ERROR/TiUncaughtHandler(2804): java.lang.IndexOutOfBoundsException: Invalid index 2, size is 2
ERROR/TiUncaughtHandler(2804):     at java.util.ArrayList.throwIndexOutOfBoundsException(ArrayList.java:257)
ERROR/TiUncaughtHandler(2804):     at java.util.ArrayList.get(ArrayList.java:311)
ERROR/TiUncaughtHandler(2804):     at ti.modules.titanium.ui.widget.tableview.TiTableView$TTVListAdapter.getItem(TiTableView.java:158)
ERROR/TiUncaughtHandler(2804):     at ti.modules.titanium.ui.widget.tableview.TiTableView$TTVListAdapter.isEnabled(TiTableView.java:239)

Having IndexOutOfBoundsExceptions being thrown by the framework doesn’t inspire me with confidence. In this case I had to resort to commenting out lines of code in order to identify the offender. It transpired to be this:

table1.appendRow({title: title});

Where table1 is a TableView. The title variable was a reference to a row title which was being deleted in the following line. The solution to this problem was to create a new variable for title instead of re-using the previous declaration.

This isn’t a major error, and my fix doesn’t necessarily indicate that the problem lies with Titanium. I have concluded, though, that although Titanium provides you with a rapid development environment, when an error occurs you’re likely to have to resort to using the same debugging tools and methods which you’d use for native development.

It would be nice to be able to import a Titanium project into Eclipse and use the debugger available for that IDE.

Testing on a device

Unfortunately I wasn’t able to use the Titanium IDE to run my application on a device. The progress bar would appear on the user interface, but it didn’t seem to do anything.

I found that you could just install the APK files designed for the emulator on your device. Navigate to the build/android/bin folder of your project and use the ADB utility.

adb -d uninstall com.owengriffin.quicklist
adb -d install app.apk

The first command removes the application from the device, the second will install it. Replace my package name with the package name of your application.

Analysing the standard output

The Titanium IDE wouldn’t create a distributable APK for me. The progress bar appeared, but nothing happened. This was a result of not reading the documentation through completely - it’s easy to accidentally skip sections, and annoying when you don’t realise for hours. When something doesn’t appear to work in Titanium, examine the standard output (stdout) of the application. This is easiest when you run it from a terminal window.

The Titanium IDE is split into 2 parts: a fancy user interface and a bunch of Python scripts. It’s possible to see which Python scripts are being run from the standard output. It’s not possible to see the result of these scripts - to do that you have to copy the command into a terminal window and execute it there. For example, the standard output for Titanium when running the Distribute command gave me the following:

[Titanium.API] [Information] (JavaScript.KKJSList) [ "/home/ogriffin/.titanium/mobilesdk/linux/1.4.0/android/builder.py", "distribute", ""QuickList2"", ""/usr/local/share/applications/android-sdk-linux_86"", ""/home/ogriffin/workspace/QuickList2"", ""com.owengriffin.quicklist"", ""/home/ogriffin/Dropbox/keystores/android.keystore"", ""XXXX"", ""quicklist"", ""/home/ogriffin"", ""9"", ]

You can see from the above that Titanium runs the builder.py script with the options provided by the user interface. It’s worth noting that your keystore password is sent in the standard output, which isn’t ideal from a security perspective.

So the reason why Titanium wasn’t creating a distributable APK for me? My password for the keystore was different from the key password. Once I’d set the keystore password to match the key password everything worked fine.

Of course this is mentioned in the Publishing to the Android Market guide - if you read it correctly.

8 hours later..

Appcelerator claim that one of the main advantages of using Titanium over writing native applications is the speed of development. To a certain extent this is true. Coming from a web development background it is easier to use Javascript and HTML than it is to learn a completely new language, whether that is Objective-C or Java. However, using Titanium still forces the developer to test their applications in the emulator or on the device. Although closer to representing a real environment, deploying to the emulator is slow, due to the longer build process of Titanium applications.

The tool itself - the helper application which builds your Titanium project - was buggy and sometimes unusable. Often it would crash and require a restart. Most of the time it wasn’t immediately apparent that the application had crashed - the user interface just stopped responding. The application would be better if it showed the commands which it executes.

I’ve not yet managed to test the cross-platform abilities of Titanium - that’s next on my list. Also on my radar is PhoneGap, a similar toolkit.

Logging into Ubuntu using a work access card

I’ve been playing with NFC (Near Field Communication) technology for work recently, and as an aside this Friday afternoon I decided to set up my Ubuntu development box to log in automatically when I scan my work pass. Perhaps a little pointless, but it could have some uses… Anyway here are my notes.

To do this you’ll need an NFC card reader. I’m using one from Touchatag, for no other reason than that it was the one provided to me by work. But to be honest, I might just go out and buy one anyway.

Fortunately somebody, somewhere has already written the software to do all of this - and all you have to do is compile, install and configure. You’ll need libnfc and pam_nfc.


Prerequisites

You’ll need to install several packages before any of the tools will work:

sudo apt-get install build-essential autoconf2.64 libtool pkg-config libusb-dev libpcsclite-dev wget libpam0g-dev


libnfc

Download libnfc from Google Code:

wget http://libnfc.googlecode.com/files/libnfc-1.3.4.tar.gz

Extract and navigate to the folder:

tar xvf libnfc-1.3.4.tar.gz
cd libnfc-1.3.4

Configure, Compile and Install!

autoreconf -vis
./configure
make
sudo make install
sudo ldconfig

Test the detection of any NFC devices by plugging in your card reader and placing a card on it. Running the following command should display your card:

nfc-list
If these instructions don’t work try the libnfc installation documentation.


pam_nfc is the PAM module which handles the authentication. You’ll also need to download and compile this from source.

svn checkout http://nfc-tools.googlecode.com/svn/trunk/pam_nfc
cd pam_nfc
autoreconf -vis
./configure --prefix=/usr --sysconfdir=/etc --with-pam-dir=/lib/security
sudo make install

Modify /etc/pam.d/login and /etc/pam.d/gdm and add the following line:

auth         sufficient pam_nfc.so

Now you need to associate your user account with an NFC card:

sudo pam-nfc-add ogriffin

Testing browsers concurrently

Over the last few months I’ve been using Cucumber to write Behaviour-Driven-Development tests for my web projects. The tests I’ve been writing have been fairly simple and sequential. The tests would, for example, authenticate a user, perform some action and then log out. Only one user was required to perform an action at any one time. If two users were required to interact then the test would log out as the first user and authenticate as the second. The application could wait for the other user to re-appear before delivering its action. I’ve recently started writing some BOSH applications which require both users to be logged in at the same time. Testing these requires two browsers to be open concurrently.

A very basic sequential scenario

This is an example of a simple sequential test which only requires a single browser session.

  Scenario: A message should be sent to another user
    Given I am on the index page
    When I enter a random username into "register_username"
    And I enter a random password into "register_password"
    And I enter a valid email address into "register_email"
    When I click "Register"
    Then I should see the text "Registration successful"

As you can see from above, it registers a user with the web site. This is an easy scenario to programme because it only involves one user. Scenarios become more complicated when they involve interactions between users:

Scenario: Poke another user
  Given I have a user called Mary
  Given I have a user called Dave
  When I log in as Dave
  And I am on the messages page
  When I click "Poke" Mary
  Then I should see the text "You have poked Mary"
  Then I log out
  When I log in as Mary
  Then I should see the text "Dave has poked you"

In this scenario we are testing “Poke” functionality. You can see from above that for the user “Mary” to receive her “Poke” the test has to log out of “Dave” and re-authenticate. Ideally we would like to be able to test receiving the message immediately. This would remove this scenario’s dependency on the log out steps, which could be another scenario.

We can also identify that only lines 4, 5, 6, 7 and 10 are related to the scenario outlined on line 1. The rest is either setting the scene or fluff required for the scenario to pass. The fluff doesn’t directly correspond to a real user, so arguably shouldn’t be there.

A more concurrent scenario

The following is a scenario written without the unnecessary log out steps.

Scenario: Poke another user
  Given there is a user called Mary
  Given there is a user called Dave
  Given "Dave" is logged in
  Given "Mary" is logged in
  Given "Dave" is on the messages page
  When "Dave" clicks "Poke" "Mary"
  Then "Dave" should see the text "You have poked Mary"
  And "Mary" should see the text "Dave has poked you"

In the above scenario we no longer have any steps starting with I; each step refers to the user who is performing the action. From this scenario you can see that the scene is set up by the first five Given statements. This allows the When, Then, and And statements to describe only the actions required for the scenario, not the setup or the fluff described earlier.

Creating some more complex rules

We have to modify our steps to match the user’s name at the beginning:

When /^\"?(I|[^\"]*)\"? clicks? \"([^\"]*)\" \"([^\"]*)\"$/ do |who, what, to|
  # Implementation pending
end
The above rule matches line 7 from the previous scenario. You can see that it involves a more complex regular expression to extract which user is undertaking the action.
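The behaviour of this regular expression can be exercised outside Cucumber in plain Ruby; the sample step text below is taken from the scenario above:

```ruby
# Exercising the step-matching regular expression outside Cucumber.
pattern = /^\"?(I|[^\"]*)\"? clicks? \"([^\"]*)\" \"([^\"]*)\"$/

m = '"Dave" clicks "Poke" "Mary"'.match(pattern)
puts m[1]  # => Dave  (who performs the action)
puts m[2]  # => Poke  (what is clicked)
puts m[3]  # => Mary  (the target user)

# The first-person form still matches, capturing "I":
puts 'I click "Poke" "Mary"'.match(pattern)[1]  # => I
```

Note that the optional quotes and the `clicks?` suffix let the same step definition serve both the first-person and named-user step styles.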

A choice of browsers

The Cucumber Watir example shows us how to choose between FireWatir, Celerity and Watir’s Internet Explorer and Safari support.

case PLATFORM
when /linux/
  require 'firewatir'
  Browser = FireWatir::Firefox
when /darwin/
  require 'safariwatir'
  Browser = Watir::Safari
when /win32|mingw/
  require 'watir'
  Browser = Watir::IE
when /java/
  require 'celerity'
  Browser = Celerity::Browser
else
  raise "This platform is not supported (#{PLATFORM})"
end

# "before all"
browser = Browser.new

Before do
  @browser = browser
end

# "after all"
at_exit do
  browser.close
end
The above code selects a Watir variation based on the platform the tests are running on. Notice that it assigns the selected variation to a Browser constant. The Browser class is then instantiated and made available to the step definitions.

To cope with multiple browsers I made some modifications to the Cucumber example above. Firstly I changed browser and @browser to be a hash of Browser instances.

browser_instance = {}
browser_port_count = 6429

Before do
  @browser_instance = browser_instance
  @browser_port_count = browser_port_count
end

# "after all"
at_exit do
  browser_instance.keys.each do |key|
    browser_instance[key].close
  end
end
The key for the browser_instance hash would be the user’s name provided by the step definition. To aid the look-up from the step definitions I wrote the following helper:

def get_browser(who)
  if not @browser_instance.has_key? who
    @browser_instance[who] = Browser.new
    @browser_port_count = @browser_port_count + 1
  end
  return @browser_instance[who]
end
You can see from the above code that whenever the key does not exist in the browser_instance Hash a new Browser instance is created.
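The lazy-creation behaviour can be illustrated in isolation with a stub Browser class (a hypothetical stand-in for whichever Watir variant was selected earlier):

```ruby
# Stub stand-in for a Watir browser class, purely to illustrate the
# lazy look-up behaviour of the helper (hypothetical, not FireWatir).
class Browser; end

@browser_instance = {}
@browser_port_count = 6429

def get_browser(who)
  if not @browser_instance.has_key? who
    @browser_instance[who] = Browser.new
    @browser_port_count = @browser_port_count + 1
  end
  return @browser_instance[who]
end

dave = get_browser("Dave")
mary = get_browser("Mary")
puts dave.equal?(get_browser("Dave"))  # => true, same instance reused
puts @browser_instance.size            # => 2
```

Each named user gets exactly one browser, and repeated look-ups for the same name return the existing instance.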

The above code will not work on all implementations of Watir. We now need to make our Cucumber tests a little more specific to the Watir variants.

So an example step definition using the code outlined above is:

Then /^\"?(I|[^\"]*)\"? should see the text "([^\"]*)"$/ do |who, what|
  if not get_browser(who).text.include? what
    fail get_browser(who).text
  end
end


Celerity

Celerity is an implementation of the Watir API using HtmlUnit, a headless Java-based web browser. I’ve used Celerity to work with multiple browser instances, but this also presents some problems.

Celerity will allow you to create multiple instances of the browser, but will not allow you to attach a viewer to the same port. So if you need to view your tests running you’ll need to specify a viewing port:

@browser_instance[who] = Browser.new({:viewer => "127.0.0.1:#{@browser_port_count}"})

It’s worth mentioning here some other useful options to Celerity:

  • :log_level changes the amount of logging you’ll see from Celerity. I have this set to :all.
  • :javascript_exceptions will raise any Javascript errors
  • :resynchronize will resynchronize any Ajax calls. You’ll need to pause Celerity to wait for any Ajax responses. There is more information on handling Ajax requests on the Celerity Wiki.

Before running your tests you’ll now need to run a viewer on each of these ports. This can be done with the following command:

QT_CELERITY_VIEWER_PORT=6430 ./QtCelerityViewer &

Repeat the command incrementing the port number for every new user you want created.
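A small shell loop saves the repetition; this is a sketch (the num_users count and viewer path are assumptions for illustration), with the actual launch line left commented out:

```shell
# Start one viewer per expected user on consecutive ports (sketch).
start_port=6430
num_users=3
for i in $(seq 0 $((num_users - 1))); do
  port=$((start_port + i))
  echo "starting viewer on port $port"
  # QT_CELERITY_VIEWER_PORT=$port ./QtCelerityViewer &
done
```

Keep the port range in step with the browser_port_count starting value used in the test setup.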


Firewatir

Unfortunately Firewatir does not support multiple instances. Firewatir works by sending commands to a ‘jssh’ extension installed in Firefox. jssh listens on port 9997, and it is not possible to change the port number unless you install coderr’s modifications.

Once you have installed coderr’s modifications you can then create Firefox instances using the following code:

@browser_instance[who] = FireWatir::Firefox.new(:port => @browser_port_count)

Another problem with Firewatir is that it does not support Firefox 3.6 on Linux, my development platform of choice. After uninstalling the Ubuntu package for Firefox, I installed the generic Linux Firefox build from Mozilla.org.

Customizing JForum with Maven2

JForum is an open source forum, or bulletin board, webapp written in Java. Recently I was required by work to produce a customised version for deployment. The customised version would integrate with our Single Sign On solution, namely CAS, and would be decorated with the skin or template of the web site. In this article I describe using the Maven2 tool to keep my modifications in a separate source tree from JForum.

There are two methods of performing customisations:

  1. Modifying the existing source code and packaging a new war file. This is the simplest approach, and allows you to use the build scripts provided with the JForum download. However, combining the source code of JForum with your modifications makes it harder to apply bug fixes later on, and harder to identify your customisations.
  2. Use Maven2 to overlay your customisations over the existing WAR file distributed by JForum.

The advantages of the second approach are:

  • It keeps your modifications separate
  • Allows you to upgrade JForum easily
  • It integrates well with existing build tools

Install JForum to your local repository

$ mvn install:install-file -DartifactId=jforum -Dversion=2.1.8 -Dfile=jforum-2.1.8.war -DgroupId=net.jforum -DgeneratePom=true -Dpackaging=war

Create a basic pom.xml for the JForum customisations

Use the webapp archetype to create a basic Maven project which will contain your customisations.

$ mvn archetype:generate -DarchetypeArtifactId=maven-archetype-webapp -DgroupId=com.company.jforum -DartifactId=jforum -Dversion=2.1.8

Unless you envisage customising the web.xml, delete the auto-generated version.

$ rm src/main/webapp/WEB-INF/web.xml

Remove the auto-generated index.jsp. The equivalent of this file within the JForum war redirects to the list page. There shouldn’t be any need to customise this file.

$ rm src/main/webapp/index.jsp

It’s worth using some version control system to manage the changes to this project:

$ git init
$ echo "target" > .gitignore
$ git add pom.xml src .gitignore
$ git commit -m "Initial code import"

Because this is a Maven project I’m excluding the target folder from any commits.

Dependencies and WAR overlay

Remove the JUnit dependency created by the archetype. Our project won’t have any tests, so it’s unnecessary.

Add the JForum artifact we added to our local repository as a dependency:
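Given the coordinates used in the install:install-file command above, the dependency entry looks something like this (a sketch; check it against your pom.xml):

```xml
<dependency>
  <groupId>net.jforum</groupId>
  <artifactId>jforum</artifactId>
  <version>2.1.8</version>
  <type>war</type>
</dependency>
```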


Add a configuration to the maven-war-plugin to overlay the JForum WAR.
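A sketch of the relevant build section, assuming the maven-war-plugin overlay support of that era and a jforum final name (the exact configuration in the original article may have differed):

```xml
<build>
  <finalName>jforum</finalName>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-war-plugin</artifactId>
      <configuration>
        <overlays>
          <overlay>
            <groupId>net.jforum</groupId>
            <artifactId>jforum</artifactId>
          </overlay>
        </overlays>
      </configuration>
    </plugin>
  </plugins>
</build>
```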

Ensure that a jforum.war is generated within the target folder when the mvn package command is run.

Applying customisations

The majority of JForum customisations will be modifications to the .properties files. Currently these are packaged within the JForum war file. To apply our customisations we create a duplicate of each .properties file and place it within the src/main/resources folder.

Database configuration

When we ran the mvn package command Maven extracted the war file into the target folder. We can copy resources from this location into the source folder. The following creates a customised mysql.properties file.

$ mkdir -p src/main/webapp/WEB-INF/config/database/mysql/
$ cp target/jforum/WEB-INF/config/database/mysql/mysql.properties src/main/webapp/WEB-INF/config/database/mysql/

After running the above, make some changes to mysql.properties and check they are packaged within the war:

$ nano src/main/webapp/WEB-INF/config/database/mysql/mysql.properties
$ mvn clean package
$ # Extract the mysql.properties from the war file
$ unzip target/jforum.war WEB-INF/config/database/mysql/mysql.properties
$ # Check that the modified properties appear within the extracted file
$ nano WEB-INF/config/database/mysql/mysql.properties
$ # Clean up
$ rm -rf WEB-INF

You need to extract and install the MySQL schema onto your chosen database before deploying.

Skins and visual changes

Copy the default skin from the war file and place it within the src/main/webapp folder.

$ mkdir -p src/main/webapp/templates
$ cp -r target/war/work/net.jforum/jforum/templates/default src/main/webapp/templates/default

You can then modify any of the files within the templates/default directory to alter the skin.

Once you have made all your modifications it’s a good idea to diff the templates/default folder in src with the one in target and remove any files in src which are unchanged. By doing this you reduce the number of changes between your customisations and vanilla JForum, which will make it easier to apply bug fixes later on.

Jetty configuration

The generated jforum.war file should now be deployable onto a Java application server. To test my changes I deployed it onto Tomcat 6.

For development, however, I want to use the Jetty Maven plugin. This will allow me to run the forum quickly.

Modify the pom.xml and add the following configuration to the <plugins> section:
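A minimal sketch using the Mortbay Jetty plugin of that era (the plugin coordinates are an assumption, not taken from the original article):

```xml
<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
</plugin>
```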


Now when you run the following command, Jetty will start and deploy the war file:

$ mvn jetty:run-war

Making modifications to the Java source

You may need to make some modifications to the Java source code of JForum. This may be required if you want to implement a custom Single Sign On (SSO) provider.

Add the following target to the Ant build.xml of JForum:

<target name="jar" depends="compile">
    <jar destfile="${build.dir}/jforum-2.1.8.jar" basedir="${classes.dir}"/>
</target>

Run the following command to build the sources:

$ ant clean jar

This will create a JAR file within jforum/build/. This JAR will need to be installed into your local Maven repository.

$ mvn install:install-file -DartifactId=jforum -Dversion=2.1.8 -Dfile=jforum-2.1.8.jar -DgroupId=net.jforum -DgeneratePom=true -Dpackaging=jar

The JAR can now be added as a dependency to our JForum pom.xml. You’ll also need to add the dependencies required by your code modifications to the pom.xml.

Project history in Netbeans

I found that my Netbeans IDE was not opening my Maven projects because it had some stale project metadata associated with them. This is just a quick hint on how to clear the project history in the Netbeans IDE. This has only been tested on version 6.8.

  • Quit Netbeans
  • Run the following command:

    rm ~/.netbeans/6.8/config/Preferences/org/netbeans/modules/projectui.properties

  • Restart Netbeans

You can be more particular about modifying this file as described on the Netbeans forums.