Logging tests to separate files

We have been using TestNG to run our tests for a while now. When we initially started, our code was written using Log4J, which we had configured to log all the tests to a single file. This made the logs very unwieldy and searching for issues very difficult; digging through 100+ MB of logs isn't an easy task. Switching to rolling files only made matters worse. So, I started looking for ways to log each test case into a separate file (meaning a test named foo would be logged to foo.log).

My first attempt was to do this using Log4J itself, but Log4J doesn't provide an easy way of doing it. It's possible if you follow certain conventions in declaring your loggers and in how you use MDC, but I wasn't able to get it working with all my code. Looking a bit further, I found Logback, and its SiftingAppender is just what I needed.

So, using slf4j, logback, MDC and a few simple coding conventions, I was able to get per test log files working. Here’s what you need to do:

  1. Start using SLF4J for your logging. If you are using Log4J, moving from Log4J to SLF4J is pretty simple
  2. Write functions to set MDC at start of a test and unset it at the end of a test
  3. Update the test cases, such that the MDC set/unset functions are invoked
  4. Configure logback.xml to use the SiftingAppender

I’ll expand a bit more on the above.

Using SLF4J

Starting to use SLF4J, or moving to it from another logging framework, is pretty straightforward. I didn't bother using the migrator application at http://www.slf4j.org/migrator.html and found it much easier to just use regexes to do the job. There were some places where log.xxx(object) had to be changed to log.xxx(object.toString()), but that wasn't a whole lot of pain.
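To give a flavor of the regex approach (the exact patterns depend on your codebase; the ones below are hypothetical examples, not necessarily the ones I used), a couple of replacements can rewrite Log4J imports and logger declarations into their SLF4J equivalents:

```java
public class Slf4jMigrationSketch {
   public static void main(String[] args) {
      // Hypothetical before-lines; the same rewrite can be done with sed
      // or an IDE's regex search-and-replace.
      String[] lines = {
         "import org.apache.log4j.Logger;",
         "private static final Logger log = Logger.getLogger(Foo.class);"
      };
      for (String line : lines) {
         String migrated = line
            // swap the Log4J import for the SLF4J interface
            .replaceAll("org\\.apache\\.log4j\\.Logger", "org.slf4j.Logger")
            // SLF4J loggers are obtained via LoggerFactory
            .replaceAll("Logger\\.getLogger\\(", "LoggerFactory.getLogger(");
         System.out.println(migrated);
      }
   }
}
```

The same patterns work equally well in sed or an IDE's regex search-and-replace.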

Code to set/unset MDC

Logback's documentation on MDC is very extensive and explains the concept quite clearly. Essentially, MDC allows us to share a key/value pair across a thread hierarchy. Using MDC is as simple as putting a key/value into a map; the value is then accessible at any time in that thread and in any of its children. At the beginning of the test, we put <"testname", $test_case_name> as the key/value into the map; at the end of the test, we remove the key. The sifting appender uses this MDC value to create log files at runtime. I wrote a simple class to encapsulate this functionality:

package com.nm.examples;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

/**
 * Class to handle setting/removing MDC on a per test case basis. This helps
 * us log each test case into its own log file. Please see
 * {@link http://logback.qos.ch/manual/appenders.html#SiftingAppender}
 * and {@link http://logback.qos.ch/manual/mdc.html}
 *
 * @author nullin
 */
public class TestLogHelper {

  private static final Logger log = LoggerFactory.getLogger(TestLogHelper.class);
  public static final String TEST_NAME = "testname";

  /**
   * Adds the test name to MDC so that the sift appender can use it and log
   * the new log events to a different file
   *
   * @param name name of the new log file
   * @throws Exception
   */
  public static void startTestLogging(String name) throws Exception {
    MDC.put(TEST_NAME, name);
  }

  /**
   * Removes the key (log file name) from MDC
   *
   * @return name of the log file, if one existed in MDC
   */
  public static String stopTestLogging() {
    String name = MDC.get(TEST_NAME);
    MDC.remove(TEST_NAME);
    return name;
  }
}
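If you are curious why MDC values are visible down the thread hierarchy: MDC is conceptually similar to the JDK's InheritableThreadLocal. The sketch below is not logback code, just a stdlib-only illustration of the semantics that the class above relies on (class and key names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class MdcSemanticsSketch {
   // MDC is conceptually a per-thread map that child threads inherit
   private static final InheritableThreadLocal<Map<String, String>> CTX =
         new InheritableThreadLocal<Map<String, String>>() {
            @Override
            protected Map<String, String> childValue(Map<String, String> parent) {
               // each child thread gets a copy of the parent's map
               return parent == null ? null : new HashMap<>(parent);
            }
         };

   public static String valueInChildThread() throws InterruptedException {
      Map<String, String> map = new HashMap<>();
      map.put("testname", "fooTest");
      CTX.set(map);
      final String[] seen = new String[1];
      // a thread created after the put sees the parent's value
      Thread child = new Thread(() -> seen[0] = CTX.get().get("testname"));
      child.start();
      child.join();
      return seen[0];
   }

   public static void main(String[] args) throws InterruptedException {
      System.out.println("child sees: " + valueInChildThread());
   }
}
```

This is also why removing the key at the end of a test matters: it keeps later work on the same thread from being attributed to the finished test.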

Update test cases

Now, you just need to make sure that you invoke TestLogHelper.startTestLogging(String testname) as early as possible during the test execution. Then, at the end of the test, invoke TestLogHelper.stopTestLogging() to ensure that no extra log statements end up in this log file.

As we are using TestNG, for us it was a very simple matter of creating methods annotated with @BeforeClass or @BeforeMethod, as requirements dictate, and putting this code in there. For example, I use the following methods in the base class for all our tests:

   private String testId;

   @BeforeMethod
   public void testSetUp() throws Exception {
      //start logging to test specific log file
      TestLogHelper.startTestLogging(getTestId());
      //Do some setup specific stuff here
   }

   @AfterMethod
   public boolean testCleanUp() throws Exception {
      try {
         //Do some cleanup specific stuff here
      } finally {
         //stop test logging to test specific file
         TestLogHelper.stopTestLogging();
      }
      return true;
   }

   public String getTestId() {
      return ((testId == null) ? (this.getClass().getName()) : testId);
   }

Configure logback.xml with SiftingAppender

The last part is to configure the logback.xml to use a SiftingAppender. You can look at the documentation for SiftingAppender to see the different examples. Following is a snippet of the configuration that I am using:

   <appender name="RootSiftAppender" class="ch.qos.logback.classic.sift.SiftingAppender">
      <!-- route log events based on the "testname" MDC key; fall back to testrun -->
      <discriminator>
         <key>testname</key>
         <defaultValue>testrun</defaultValue>
      </discriminator>
      <sift>
         <appender name="FILE-${testname}" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <file>${testname}.log</file>
            <!-- rollover size and index values below are illustrative -->
            <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
               <fileNamePattern>${testname}.%i.log</fileNamePattern>
               <minIndex>1</minIndex>
               <maxIndex>10</maxIndex>
            </rollingPolicy>
            <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
               <maxFileSize>10MB</maxFileSize>
            </triggeringPolicy>
            <layout class="ch.qos.logback.classic.PatternLayout">
               <Pattern>%d{ISO8601} %-5level %C{1} [%M:%L] [%thread] - %msg%n</Pattern>
            </layout>
         </appender>
      </sift>
   </appender>

This configuration says that the test cases will specify the MDC value using the key "testname"; if logback finds no such key, it will just log the statements into testrun.log.

And that’s it. You should be good to go.

Hudson CI for Testing

We are heavily using Hudson for testing a product we are working on. Other than maintenance and some instability issues, it has been a great experience working with Hudson. We started out with a single machine running the Hudson server and soon moved to a master-slave configuration, where we have set up one master connected to four slaves. We currently run about 10-15 jobs on this setup at one point or another during the day, and use it to run pre-checkins (well, in a way, *post*-checkins), BATs and regression runs.

Here are a couple of screenshots that I took when we were just getting started and were stabilizing Hudson and the product.

We have the following kinds of jobs running on this system:

  1. Pre-checkins: We provide the development team with a set of tests that they are supposed to run before they commit any code. To ensure that the development team does this, we have a job that runs the pre-checkin tests on every commit. This helps us ensure that things remain green throughout the day and we come to know of failures asap.
  2. BATs: Basic Acceptance Tests are another category of tests that help us ascertain the daily quality of builds. Before QA picks up a build for testing, we run the BATs against the daily official build and make sure that all tests pass.
  3. Regression runs: We also use Hudson to run regression tests. At the time of taking these screenshots, we had only about 1000 tests, and hence a single job which would run all the tests and take about 6-7 hours to complete. Now, we have split that job into multiple jobs using the multiple build stages that Hudson supports. I will talk more about this in a later post.
  4. Helper jobs: We have parameterized jobs set up that help anyone (dev or QA) start any kind of test run against an official or sandbox build.
  5. CI jobs: This is how Hudson was originally intended to be used. For each branch that we work on, we poll the SCM every 5 minutes to check for checkins and build the dev and QA code. This helps us ensure that no checkin has broken the QA build and that users have access to the latest artifacts/distributions at all times.
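For reference, the five-minute polling in the CI jobs is just a cron expression in the job's "Poll SCM" schedule field; assuming standard cron syntax, it would look like:

```text
*/5 * * * *
```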

For the project, we had invested time into providing infrastructure to pick up official/sandbox builds and then use these builds to set up environments, configure tests and execute them. We just had to put in some more time to change or add features to make it all work more smoothly with Hudson. Now, instead of letting everyone fend for themselves when it comes to setting up environments for testing and then reporting the results, we use Hudson to automate that process to a very large extent, making these tasks more manageable and the product quality more visible.

Connect to MS SQL Server using JDBC

The following example shows a Java class that can be used to verify whether your connection to MS SQL Server using JDBC is working properly. The code establishes a connection to the MS SQL Server and then executes a query that prints the names of the user tables. You need to get either Microsoft's JDBC driver or the jTDS JDBC driver.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectMSSQLServer {

   public void dbConnect(String db_connect_string,
            String db_userid,
            String db_password) {
      try {
         Connection conn = DriverManager.getConnection(db_connect_string,
                  db_userid, db_password);
         System.out.println("connected");
         Statement statement = conn.createStatement();
         String queryString = "select * from sysobjects where type='u'";
         ResultSet rs = statement.executeQuery(queryString);
         while (rs.next()) {
            System.out.println(rs.getString(1));
         }
         conn.close();
      } catch (Exception e) {
         e.printStackTrace();
      }
   }

   public static void main(String[] args) {
      ConnectMSSQLServer connServer = new ConnectMSSQLServer();
      connServer.dbConnect("jdbc:sqlserver://<hostname>", "<user>",
               "<password>");
   }
}

Change <hostname>, <user> and <password> to the correct values. Also, if you use the jTDS driver, the driver class should be set to net.sourceforge.jtds.jdbc.Driver.
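One thing to watch out for: jTDS uses a different URL scheme from Microsoft's driver. The sketch below only builds the URL (host, port, and database are placeholders); the commented lines show how the driver would be loaded once the jTDS jar is on the classpath:

```java
public class JtdsUrlSketch {
   public static void main(String[] args) {
      // With the jTDS jar on the classpath, the driver is loaded via:
      //   Class.forName("net.sourceforge.jtds.jdbc.Driver");
      // and DriverManager.getConnection() is given a URL of this form:
      String host = "<hostname>";     // placeholder
      String database = "<database>"; // placeholder
      String url = "jdbc:jtds:sqlserver://" + host + ":1433/" + database;
      System.out.println(url);
   }
}
```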