Category Archives: code

HemingwayEx v1.8 released

I played around a bit with WP 3.0 Menus and was able to incorporate them into HemingwayEx. Also, with this release, users can choose a single-column homepage instead of the usual two columns.

Setting up WP Menus was quite straightforward, based on the information in Nicholas’s article. Some extra work was needed to show drop-down menus, and for that I chose the Superfish jQuery plugin. The Superfish site has everything you need to get the menus working; I just needed to tweak the arrow images to change their color from white to black and fix some CSS.

One thing that caused me a considerable headache was invoking the Superfish JavaScript via jQuery when the document loads. Because other parts of the theme were also using jQuery and other JavaScript libraries, I had to use jQuery’s noConflict mode. I had to change the following:

<script  type="text/javascript"> 
    $(document).ready(function(){ 
        $("ul.sf-menu").superfish(); 
    }); 
</script>

to

<script  type="text/javascript">  
    $j = jQuery.noConflict();
    $j(document).ready(function() { 
        $j('ul.sf-menu').superfish(); 
    });
</script>

And then everything worked perfectly. Hope you enjoy this release.

I have started hosting HemingwayEx code and downloads on GitHub now. v1.8 is available for download right now.

Logging tests to separate files

We have been using TestNG to run our tests for a while now. When we initially started, our code was written using Log4J, which we configured to log all the tests to a single file. This made the logs very unwieldy and made looking for issues difficult; digging through 100+ MB of logs isn’t an easy task. Changing this to rolling files only made matters worse. So, I started looking for ways to log each test case into a separate file (meaning a test named foo would be logged to foo.log).

My first attempt was to do this using Log4J itself, but Log4J doesn’t provide an easy way of doing this. It’s possible if you follow certain conventions in declaring your loggers and in how you use MDC, but I wasn’t able to get it working with all my code. Looking a bit further, I found Logback, and its SiftingAppender is just what I needed.

So, using SLF4J, Logback, MDC, and a few simple coding conventions, I was able to get per-test log files working. Here’s what you need to do:

  1. Start using SLF4J for your logging. If you are using Log4J, moving from Log4J to SLF4J is pretty simple
  2. Write functions to set MDC at start of a test and unset it at the end of a test
  3. Update the test cases, such that the MDC set/unset functions are invoked
  4. Configure logback.xml to use the SiftingAppender

I’ll expand a bit more on the above.

Using SLF4J

Starting to use SLF4J, or moving to it from another logging framework, is pretty straightforward. I didn’t bother with the Java migrator application at http://www.slf4j.org/migrator.html and found it much easier to just use regular expressions to do the job. There were some places where log.xxx(object) had to be changed to log.xxx(object.toString()), but that wasn’t a whole lot of pain.
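To illustrate the kind of regex-based rewrite I mean, here is a minimal, hypothetical sketch (the class name and the two patterns are illustrative, not the exact expressions we used; real code may need a few more patterns):

```java
// Hypothetical sketch of a regex-based Log4J -> SLF4J migration for the two
// lines that typically differ: the import and the logger factory call.
public class Slf4jMigrator {
    public static String migrateLine(String line) {
        return line
            // import org.apache.log4j.Logger; -> import org.slf4j.Logger;
            .replaceAll("org\\.apache\\.log4j\\.Logger", "org.slf4j.Logger")
            // Logger.getLogger(...) -> LoggerFactory.getLogger(...)
            .replaceAll("Logger\\.getLogger\\(", "LoggerFactory.getLogger(");
    }
}
```

Running such a rewrite over the source tree handles the bulk of the mechanical changes; the log.xxx(object.toString()) cases still need a manual pass.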

Code to set/unset MDC

Logback’s documentation on MDC is very extensive and explains the concept quite clearly. Essentially, MDC allows us to share a key/value pair across a thread hierarchy. Using MDC is as simple as putting a key/value into a map; the value is then accessible at any time in that thread and in any of its children. At the beginning of the test, we’ll put <"testname", $test_case_name> as the key/value into the map. At the end of the test, we’ll remove this key. This MDC value is used by the sifting appender to create log files at runtime. I wrote a simple class to encapsulate this functionality:

package com.nm.examples;
 
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
 
/**
 * Class to handle setting/removing MDC on per test case basis. This helps
 * us log each test case into its own log file. Please see
 * {@link http://logback.qos.ch/manual/appenders.html#SiftingAppender}
 * and {@link http://logback.qos.ch/manual/mdc.html}
 * @author nullin
 */
public class TestLogHelper
{
  private static final Logger log = LoggerFactory.getLogger(TestLogHelper.class);
  public static final String TEST_NAME = "testname";
 
  /**
   * Adds the test name to MDC so that sift appender can use it and log the new
   * log events to a different file
   * @param name name of the new log file
   * @throws Exception
   */
  public static void startTestLogging(String name) throws Exception {
    MDC.put(TEST_NAME, name);
  }
 
  /**
   * Removes the key (log file name) from MDC
   * @return name of the log file, if one existed in MDC
   */
  public static String stopTestLogging() {
    String name = MDC.get(TEST_NAME);
    MDC.remove(TEST_NAME);
    return name;
  }
}
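The "shared across a thread hierarchy" behavior described above can be sketched with nothing but the JDK: an InheritableThreadLocal gives child threads a copy of the parent's value, which is the semantics the MDC documentation describes. This is a stdlib-only illustration of the concept, not logback's actual implementation:

```java
// Stdlib-only sketch of MDC-style inheritance: a value set by the parent
// thread is visible to a child thread it spawns afterwards.
public class MdcSketch {
    private static final InheritableThreadLocal<String> TEST_NAME =
            new InheritableThreadLocal<>();

    // Sets the value in the parent (like MDC.put), spawns a child thread,
    // and returns what the child observed; clears the value at the end
    // (like MDC.remove).
    public static String valueSeenByChild(String name) {
        TEST_NAME.set(name);
        final String[] seen = new String[1];
        Thread child = new Thread(() -> seen[0] = TEST_NAME.get());
        child.start();
        try {
            child.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        TEST_NAME.remove();
        return seen[0];
    }
}
```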

Update test cases

Now, you just need to make sure that you invoke TestLogHelper.startTestLogging(String testname) as early as possible during test execution. At the end of the test, invoke TestLogHelper.stopTestLogging() to ensure that no extra log statements end up in the test’s log file.

As we are using TestNG, it was a simple matter of creating a method annotated with @BeforeClass or @BeforeMethod, as required, and putting this code in there. For example, I use the following two methods in the base class for all our tests:

   @BeforeClass
   public void testSetUp() throws Exception {
      //start logging to test specific log file
      TestLogHelper.startTestLogging(getTestId());
 
      //
      //Do some setup specific stuff here
      //
   }
 
   @AfterClass
   public void testCleanUp() throws Exception {
      try {
         //
         //Do some cleanup specific stuff here
         //
      } finally {
         //stop test logging to test specific file
         TestLogHelper.stopTestLogging();
      }
   }
 
   //testId can be set by subclasses to override the default log file name
   protected String testId;

   public String getTestId() {
      return ((testId == null) ? (this.getClass().getName()) : testId);
   }

Configure logback.xml with SiftingAppender

The last part is to configure logback.xml to use a SiftingAppender. You can look at the SiftingAppender documentation for various examples. The following is a snippet of the configuration that I am using:

   <appender name="RootSiftAppender" class="ch.qos.logback.classic.sift.SiftingAppender">
      <discriminator>
         <Key>testname</Key>
         <DefaultValue>testrun</DefaultValue>
      </discriminator>
      <sift>
         <appender name="FILE-${testname}" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <File>${testname}.log</File>
            <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
               <FileNamePattern>${testname}.%i.log</FileNamePattern>
               <MinIndex>1</MinIndex>
               <MaxIndex>100</MaxIndex>
            </rollingPolicy>
            <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
               <MaxFileSize>5MB</MaxFileSize>
            </triggeringPolicy>
            <layout class="ch.qos.logback.classic.PatternLayout">
               <Pattern>%d{ISO8601} %-5level %C{1} [%M:%L] [%thread] - %msg%n</Pattern>
            </layout>
         </appender>
      </sift>
   </appender>

This configuration says that the test cases specify the MDC value using the key “testname”, and if Logback finds no such key, it’ll just log the statements into testrun.log.
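Conceptually, the discriminator’s job boils down to the following (a hypothetical stdlib-only sketch of the key-or-default lookup, not logback code):

```java
import java.util.Map;

// Sketch of the discriminator behavior configured above: pick the log file
// name from the "testname" MDC key, falling back to the default "testrun"
// when the key is absent.
public class DiscriminatorSketch {
    public static String logFileFor(Map<String, String> mdc) {
        String name = mdc.getOrDefault("testname", "testrun");
        return name + ".log";
    }
}
```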

And that’s it. You should be good to go.

Hudson CI for Testing

We are heavily using Hudson for testing a product we are working on. Other than maintenance and some instability issues, it’s been a great experience working with Hudson. We started out with a single machine running the Hudson server and soon moved to a master-slave configuration, with one master connected to four slaves. We are currently running about 10-15 jobs on this setup at one point or another during the day and use it to run pre-checkins (well, in a way, *post*-checkins), BATs, and regression runs.

Here are a couple of screenshots that I took when we were just getting started and were stabilizing Hudson and the product.

We have the following kinds of jobs running on this system:

  1. Pre-checkins: We provide the development team with a set of tests that they are supposed to run before they commit any code. To ensure that the development team is doing this, we have a job that runs the pre-checkin tests on every commit. This helps us ensure that things remain green throughout the day and that we come to know of failures ASAP.
  2. BATs: Basic Acceptance Tests are another category of tests that help us ascertain the daily quality of builds. Before QA picks up a build for testing, we run the BATs against the daily official build and make sure that all tests pass.
  3. Regression runs: We also use Hudson to run regression tests. At the time these screenshots were taken, we had only about 1,000 tests and hence a single job that ran all the tests and took about 6-7 hours to complete. We have since split that job into multiple jobs using the multiple build stages that Hudson supports. I will talk more about this in a later post.
  4. Helper jobs: We have parameterized jobs set up that let anyone (dev or QA) start any kind of test run against an official or sandbox build.
  5. CI jobs: This is how Hudson was originally intended to be used. For each branch that we work on, we poll the SCM every 5 minutes to check for checkins and build the dev and QA code. This helps us ensure that no checkin has broken the QA build and that users have access to the latest artifacts/distributions at all times.

For the project, we had invested time in providing infrastructure to pick up official/sandbox builds and then use them to set up environments, configure tests, and execute them. We just had to put in some more time to change or add features to make it all work more smoothly with Hudson. Now, instead of letting everyone fend for themselves when it comes to setting up test environments and reporting results, we use Hudson to automate that process to a very large extent, making these tasks more manageable and the product quality more visible.

Running single test or class using TestNG and Ant

I have been using TestNG for a while now in the latest project that I am working on. We have developed a whole suite of tests for an application and TestNG has really served us well.

When it comes to running tests, we find that people (developers, testers, and others) are usually hesitant to use anything that involves much learning or differs from how they are used to doing things. So, we found it very useful to provide utilities that make it easy for anyone to pick up the test artifacts and, within a few steps, run tests to reproduce bugs. Because we build our project using Ant, it was easy to provide a build.xml with the test distribution that can run the TestNG tests. This keeps things simple and uniform.

The TestNG Ant task uses TestNG suites defined in XML files to run tests. It doesn’t provide an easy way to run a single test method or all the test methods in a single class. Basically, anytime someone has to run a test using this task, they need to create an XML file specifying the class and method for that test and then execute the Ant task. This can be simplified by letting Ant targets take care of setting up the XML file with the required test and then executing it.

It’s quite simple: create a TestNG suite XML template, modify it with the parameters that the user passes in on the command line, and then execute the TestNG Ant task. Here’s how this can be achieved. First, create a template XML file like the following at ${testng.templates}/testfn.xml:

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Single Method Suite">
   <test name="Single Method Test">
      <classes>
         <class name="@CLASS@">
            <methods>
               <exclude name=".*" />
               <include name="@TEST@" />
            </methods>
         </class>
      </classes>
   </test>
</suite>

Now, you can add the following target to the build file:

<target name="run-single-test" 
         description="run a specific test. Requires class.name property set to fully qualified name of test class and test.name property set to method name">
   <condition property="propsSpecified">
      <and>
         <isset property="class.name" />
         <isset property="test.name" />
      </and>
   </condition>
   <fail unless="propsSpecified" 
            message="class.name and/or test.name property not specified."/>
   <copy todir="${tmp.dir}" file="${testng.templates}/testfn.xml" overwrite="true">
      <filterset>
         <filter token="CLASS" value="${class.name}"/>
         <filter token="TEST" value="${test.name}"/>
      </filterset>
   </copy>
   <testng classpathref="lib.path"
           outputDir="${results.dir}/${DSTAMP}.${TSTAMP}-single-test-${class.name}-${test.name}"
           workingDir="${results.dir}/${DSTAMP}.${TSTAMP}-single-test-${class.name}-${test.name}"
           verbose="2"
           useDefaultListeners="false"
           listeners="${testng.listeners}">
      <xmlfileset dir="${tmp.dir}" includes="testfn.xml" />
      <jvmarg value="-Xmx1024m"/>
   </testng>
</target>

The target picks the testfn.xml file, replaces the tokens with the specified input and copies it to the temp location. Now, the TestNG task can use this updated XML file to run tests. This target is invoked as:

ant run-single-test -Dclass.name=com.nalinmakar.testng.ant.Demo -Dtest.name=Test1

for the following class:

package com.nalinmakar.testng.ant;

import org.testng.annotations.Test;

public class Demo
{
   @Test
   public void Test1()
   {
      //do something
   }

   @Test
   public void Test2()
   {
      //do something
   }
}
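The token substitution that the Ant &lt;filterset&gt; performs on the template can be sketched in plain Java (a hypothetical helper for illustration; Ant does this for you via the copy task):

```java
// Sketch of what the Ant <filterset> does to the suite template: replace the
// @CLASS@ and @TEST@ tokens with the properties passed on the command line.
public class TemplateFilter {
    public static String fill(String template, String className, String testName) {
        return template.replace("@CLASS@", className)
                       .replace("@TEST@", testName);
    }
}
```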

Similarly, you can also create a template and another ant target for running all the test methods in a class:

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Single Class Suite">
   <test name="Class Test">
      <classes>
         <class name="@CLASS@"  />
       </classes>
   </test>
</suite>
<target name="run-class"
        description="run all methods in a specific test class. Requires class.name property to be set to fully qualified name of class">
   <condition property="classNameSpecified">
      <isset property="class.name" />
   </condition>
   <fail unless="classNameSpecified"
         message="class.name property not specified. Don't know which test class to run."/>
   <copy todir="${tmp.dir}" file="${testng.templates}/class.xml" overwrite="true">
      <filterset>
         <filter token="CLASS" value="${class.name}"/>
      </filterset>
   </copy>
   <testng classpathref="lib.path"
           outputDir="${results.dir}/${DSTAMP}.${TSTAMP}-class"
           workingDir="${results.dir}/${DSTAMP}.${TSTAMP}-class"
           verbose="2"
           useDefaultListeners="false"
           listeners="${testng.listeners}">
      <jvmarg value="-Xmx1024m"/>
      <xmlfileset dir="${tmp.dir}" includes="class.xml" />
   </testng>
</target>

and can invoke this as

ant run-class -Dclass.name=com.nalinmakar.testng.ant.Demo