Tag Archives: xUnit

Droid Units

JUnit testing of model and service classes under Android/Dalvik is straightforward, but when it comes to testing UIs (activities and fragments), the basic support is just too low-level. Fortunately, Robotium (think Selenium for droids) provides a most-excellent adjunct for this.

Yet Robotium should be used with care to create maintainable tests that aren’t brittle. To that end, I’ve developed some UI automation practices:

  • Write true unit test classes that cover only a single activity or fragment. Of course, many UI actions will take you to the next activity (and you should assert that), but leave multi-activity testing for separate integration tests.
  • Stub back-end behaviors using standard mock and dependency injection techniques. For my current app, I kept it simple/lightweight and wrote my own code, but I’ve also used Mockito (with dexmaker) and Dagger (with javax.inject); these are nice Android-compatible frameworks. (A minimal sketch of the hand-rolled approach follows this list.)
  • Rather than repeating the same raw Robotium calls directly from your tests, wrap them in descriptive helper methods like enterUserID(String id), enterPassword(String password), clickLoginButton(), etc. This DRY approach makes for more readable tests and simplifies updates when your UI changes. (A sketch of such a subclass appears after the base class below.)
  • Since you probably use common superclasses for your activities and fragments, also create parent test case classes to factor common testing behaviors. See below for snippets from one of mine.
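
Here’s a minimal sketch of the hand-rolled stubbing mentioned in the second bullet. The ServiceRegistry name and API are invented for illustration; any simple registry or locator that your activities resolve their back ends through will do.

import java.util.HashMap;
import java.util.Map;
 
// A tiny hand-rolled service locator: production code registers real
// implementations at startup; UI tests swap in stubs during setUp().
public final class ServiceRegistry {
 
	private static final Map<Class<?>, Object> SERVICES = new HashMap<Class<?>, Object>();
 
	public static <T> void register(Class<T> type, T impl) {
		SERVICES.put(type, impl);
	}
 
	public static <T> T get(Class<T> type) {
		return type.cast(SERVICES.get(type));
	}
 
	public static void reset() {
		SERVICES.clear();
	}
}

In a test’s setUp() you reset the registry and register stubs (say, a StubAuthService that always succeeds) before launching the activity; Mockito (with dexmaker) or Dagger does the same job with less hand-rolled code.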

I haven’t found a good tool for measuring code coverage for Android apps (Emma under Android is flaky), so I’d love to hear your recommendations.

import android.app.Activity;
import android.test.ActivityInstrumentationTestCase2;
 
import com.robotium.solo.Solo;  // or com.jayway.android.robotium.solo.Solo in older Robotium releases
 
public abstract class BaseActivityTest<T extends Activity> 
                             extends ActivityInstrumentationTestCase2<T> {
 
	protected Solo mSolo;	
 
	public BaseActivityTest(Class<T> activityClass) {
		super(activityClass);
	}
 
	@Override
	protected void setUp() throws Exception {
		super.setUp();
		// Reset your DI container and common mocks here...
		mSolo = new Solo(getInstrumentation(), getActivity());
	}
 
	@Override
	protected void tearDown() throws Exception {
		mSolo.finishOpenedActivities();
		super.tearDown();
	}
 
	public void testLayoutPortrait() {
		mSolo.setActivityOrientation(Solo.PORTRAIT);
		verifyViews(getLayoutViews());		
		verifyViews(getPortraitViews());
	}
 
	public void testLayoutLandscape() {
		mSolo.setActivityOrientation(Solo.LANDSCAPE);
		verifyViews(getLayoutViews());		
		verifyViews(getLandscapeViews());		
	}	
 
	// ...
}
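
And here’s a hedged sketch of a concrete subclass with wrapped Robotium helpers, as promised above; LoginActivity, HomeActivity, the view IDs, and the button label are all made up for illustration.

import android.widget.EditText;
 
public class LoginActivityTest extends BaseActivityTest<LoginActivity> {
 
	public LoginActivityTest() {
		super(LoginActivity.class);
	}
 
	// Descriptive wrappers around raw Robotium calls keep tests readable
	// and give UI changes a single place to land.
	protected void enterUserID(String id) {
		mSolo.enterText((EditText) mSolo.getView(R.id.user_id), id);
	}
 
	protected void enterPassword(String password) {
		mSolo.enterText((EditText) mSolo.getView(R.id.password), password);
	}
 
	protected void clickLoginButton() {
		mSolo.clickOnButton("Log In");
	}
 
	public void testLoginTakesYouToHome() {
		enterUserID("gooduser");
		enterPassword("secret");
		clickLoginButton();
		mSolo.assertCurrentActivity("Expected the home screen", HomeActivity.class);
	}
}

The test methods stay short and declarative, and when the login screen changes, only the helpers move.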

Mocking J

Although I’m a perennial test-driven development (TDD) wonk, I’ve been surprised by the recent interest in my xUnits, many of which are so pedestrian I’ve completely forgotten about them.  After all, once the code is written and shipped, you can often ignore unit tests as long as they pass on builds and you aren’t updating the code under test (refactoring, extending, whatever).  Along with that interest have come discussions of mock frameworks.

Careful… heavy reliance on mocks can encourage bad practice.  Classes under test should be so cohesive and decoupled they can be tested independently with little scaffolding.  And heavy use of JUnit for integration tests is a misuse of the framework.

But we all do it.  You’re working on those top service-layer classes and you want the benefits of TDD there, too.  They use tons of external resources (databases, web services, files, etc.) that just aren’t there in the test runner’s isolated environment.  So you mock it up, and you want the mocks to be good enough to be meaningful.  Mocks can be fragile over time, so you should also provide a way out if the mocks fail but the class under test is fine.  You don’t want future maintainers wasting time propping up old mocks.

So how to balance all that? Here’s a quick example to illustrate a few techniques.

import static org.junit.Assert.assertNotNull;
 
import javax.naming.NamingException;
import javax.sql.DataSource;
 
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.junit.Assume;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
import org.springframework.mock.jndi.SimpleNamingContextBuilder;
 
public class MyServiceTest {
	private static Log logger = LogFactory.getLog(MyServiceTest.class);
	private MyService myService = new MyService();	                 // #1
	private static boolean isDatabaseAvailable = false;
 
	@BeforeClass
	public static void oneTimeSetUp() throws NamingException   {
		// Set up the mock JNDI context with a data source.
		// getDataSource() is a private helper (elided) that returns null
		// when no test database is reachable; see item 3 below.
		DataSource ds = getDataSource();
		if (ds != null) {                                        // #2
			isDatabaseAvailable = true;
			SimpleNamingContextBuilder builder = new SimpleNamingContextBuilder();
			builder.bind("jdbc/MY_DATA_SOURCE", ds);
			builder.activate();
		}
	}	
 
	@Before
	public void setUp() {
		// Ignore tests here if no database connection
		Assume.assumeTrue(isDatabaseAvailable);                 // #3
	}
 
	@Test
	public void testMyServiceMethod() throws Exception {
		String result = myService.myServiceMethod("whatever");
		logger.trace("myServiceMethod result: " + result);      // #4
		assertNotNull("myServiceMethod failed, result is null", result);
		// Other asserts here...
	}
}

Let’s take it from top to bottom (item numbers correspond to // #x comments in the code):

  1. Don’t drag in more than you need.  If you’re using Spring, you may be tempted to inject (@Autowired) the service, but since you’re testing your implementation of the service, why would you?  Just instantiate the thing.  There are times when you’ll want a Spring application context and, for those, tools like @RunWith(SpringJUnit4ClassRunner.class) come in handy.  But those are rare, and it’s best to keep it simple.
  2. Container?  Forget it!  Since you’re running out of container, you will need to mock anything that relies on things like JNDI lookups.  Spring Mock’s SimpleNamingContextBuilder does the job nicely.
  3. Provide a way out.  Often you can construct or mock the database content entirely within the JUnit using in-memory databases like HSQLDB (a sketch follows this list).  But integration test cases sometimes need an established database environment to connect to.  Those cases won’t apply if the environment isn’t there, so use JUnit’s Assume to skip them.
  4. Include traces.  JUnits on business methods rarely need logging, but traces can be valuable for integration tests.  I recommend keeping the level low (like debug or trace) to make them easy to throttle in build logs.
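
For the in-memory route, the getDataSource() helper can be as simple as the sketch below.  It’s hedged: Spring’s DriverManagerDataSource plus an HSQLDB in-memory URL is just one convenient combination, and the null fallback is what lets the Assume check skip the tests cleanly.

	// Requires org.springframework.jdbc.datasource.DriverManagerDataSource
	// and the HSQLDB driver on the test classpath.
	private static DataSource getDataSource() {
		try {
			// In-memory HSQLDB: no external environment required.
			DriverManagerDataSource ds = new DriverManagerDataSource(
					"jdbc:hsqldb:mem:mytestdb", "sa", "");
			ds.setDriverClassName("org.hsqldb.jdbcDriver");
			// Touch a connection so a misconfigured database fails here,
			// not in every test.
			ds.getConnection().close();
			return ds;
		} catch (Exception e) {
			// No database available; tests guarded by Assume are skipped.
			return null;
		}
	}

If your service looks up its data source through JNDI (as above), this plugs straight into the SimpleNamingContextBuilder binding in oneTimeSetUp().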

Frameworks like JMockit make it easy to completely stub out dependent classes.  But with these, avoid using so much scaffolding that your tests are meaningless or your classes are too tightly coupled.

Just a few suggestions to make integration tests in JUnits a bit more helpful.

Find a Bug, Code an xUnit

In today’s travels, I ran into an important bug in a new feature that had been missed in test.  One reason for the late discovery was that setting up and re-creating the problem in “black box” mode was time-consuming: change profiles, load and decision new work, complete a full processing cycle, create end of day files, and so on.  Yet at its core, it boiled down to only four possible permutations of inputs through a single business object method, and one of these four behaved badly.

This was, of course, the perfect opportunity for an xUnit: a short method could test all four permutations and prove the bug.  So, in classic TDD fashion, I coded the test case first and then fixed the business object to turn yellow to green.  This bug had escaped not because creating an SUnit was difficult, but because it had been overlooked.
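
As a hypothetical sketch of what a four-permutation test looks like (the class, method, and expected results here are made up; the real inputs were domain-specific):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
 
import org.junit.Test;
 
public class WorkDecisionerTest {
 
	// The hypothetical business method takes two flags, so there are
	// exactly four input permutations; one of them held the bug.
	@Test
	public void testAllFourPermutations() {
		WorkDecisioner decisioner = new WorkDecisioner();
		assertTrue(decisioner.shouldProcess(true, true));
		assertTrue(decisioner.shouldProcess(true, false));
		assertTrue(decisioner.shouldProcess(false, true));
		assertFalse(decisioner.shouldProcess(false, false));  // the permutation that misbehaved
	}
}

Short, and it both proves the bug and verifies the fix.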

In any system, there are lots of test scenarios (such as deep integration) that cannot be approached by xUnits.  But there are usually many more valuable xUnits that could be written than actually are.  Like refactoring, there’s never time to go back and add xUnits to old code just to do it.  But bug discovery is the perfect opportunity, and xUnits here are not only good policy, they help with coding and verifying the fix.  So, find a bug, code an xUnit.

Contrast

I happened to notice that an SUnit I added today was number 1,000.  Perhaps I should get free groceries for that milestone, but it reminded me just how much and why we take xUnits and test-driven development (TDD) for granted.  I was a bit more aware, having spent yesterday working on a customer’s C++ code that had no xUnits or test scaffolding.

It really was a study in contrasts.

For the customer’s C++ code, I found myself fighting in the Visual Studio debugger mostly with overzealous constructors and destructors (folks, constructors and destructors should just create and clean up the object; don’t use them for wild stuff like reading registries, calling decryption libraries, and connecting to and disconnecting from databases).  But the real hassle was having to run the code-compile-link-bounce-deploy gauntlet far too many times.  Often this isn’t bad (yes, incremental compile and auto build both help), but in this case it took a lot of set-up time to get data into the state it needed to be in before calling this code, and every code change required repeating that.  That’s usually true even with hot-swap JVMs.

Compare that to my SUnit-driven VA Smalltalk code.  I wrote the test case and my first stub method before I wrote the code.  I ran the test, and of course, it failed (failed big, too: red, not yellow, due to an expected message not understood exception).  I re-ran the SUnit with Debug, and started filling in the code – in the debugger.  I added methods and even an entire class that didn’t exist before I started debugging.  I inspected objects and ivars and even changed a few to guide me through crafting and whittling at the code.  I ran it again and got to yellow.  One more “code it while debugging it” pass and the test went green.

Red, then yellow, then green.  My favorite development process.

I’ll soon upgrade my Visual Studio 2008 to 2010 and look forward to the new features it offers (thank you, Microsoft, for finally bundling ASP.NET MVC, and for abandoning your own JavaScript kluges and just adopting jQuery).  But, decades later, it’s still nowhere near as productive as VA Smalltalk, VisualWorks, and Pharo, where you can write the code in the debugger while the system is still running and never have to stop it.

Why haven’t we learned?

Test the Testing

One of many benefits of test-driven development (TDD) and automated unit and regression testing is that it provides additional defect-detection safety nets.  Development teams catch more of their own defects before delivery to test.  In my own recent experience (and that of others I’ve talked with), this is very valuable and very timely.

In so many shops, recent cost cutting and offshoring has diminished the quality of component and system testing.  Often this decline is staff related: reduced tester availability and increased turnover, even within offshore test groups.  Defect escapes are costly and embarrassing.  So what’s a developer to do?

We can borrow one groovy solution from the Three Dog Night days.  The dusty pages of Harlan Mills’ 1970s papers describe defect seeding: deliberately adding defects to monitor defect discovery rates and coverage.  Also known as bebugging (with a “b”), defect seeding objectively measures the quality of the testing.  In short, it tests the testing.

Deliberately adding bugs seems to go against the grain: it’s a destructive activity, as is test itself.  Yet, properly measured and managed, it can be an important part of overall software quality engineering.  If a test team misses known, well-placed bugs, this is identified and corrective action can be taken.  That’s much better than shooting in the dark.  It can shed much-needed light when the test team is in a different location and direct oversight isn’t possible.

Much of that 70s literature on bebugging was wrapped up with somewhat extreme, academic concepts – like limited (statistical) testing and the idea that tools could automatically inject and remove defects or faults.  Much simpler approaches are possible, such as:

  1. Developers deliberately code important but non-fatal defects in new code they write, while at the same time writing xUnit test cases that will fail because of the injected defects (a sketch follows this list).  The tools themselves provide the metrics, and this avoids the risk of forgetting about some seeded bugs when it’s time to remove them after the test iteration.
  2. After releasing a sprint or build to test, the development team continues to actively test the product themselves.  Any defects found by the developers are privately recorded and tracked to determine if and when the test team finds them.
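
Here’s a hedged illustration of the first practice; the classes, the marker-comment convention, and the defect itself are all invented for this sketch.

// FeeCalculator.java (production code with one deliberate, non-fatal seed)
public class FeeCalculator {
 
	// SEEDED DEFECT #17: boundary deliberately off by one; remove after
	// the test iteration, along with the paired failing test below.
	public boolean isLargeOrder(int itemCount) {
		return itemCount > 11;  // correct rule: itemCount > 10
	}
}
 
// FeeCalculatorSeededDefectTest.java (fails as long as seed #17 is in place)
import static org.junit.Assert.assertTrue;
 
import org.junit.Test;
 
public class FeeCalculatorSeededDefectTest {
 
	@Test
	public void testLargeOrderBoundary() {
		assertTrue("Seed #17 still present", new FeeCalculator().isLargeOrder(11));
	}
}

The failing test shows up in every build report, so the seed can’t be forgotten or shipped by accident, and whether the test team independently reports the symptom tells you how well the testing is working.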

Simple bebugging practices such as these and others can, over time, improve test quality and coverage.  And you’ll have real metrics to see how well Test passes The Test.