Posts Tagged ‘Fit’

Negative Testing with Fit

December 13, 2007

Someone asked me how you do negative testing in Fit.  Negative tests are tests that are supposed to cause errors.  The first step is to get the positive test cases working, those that should not produce any errors.  If the positive cases are not passing, you cannot depend on any of the results, even if some negative test cases appear to be producing correct ones.  When the developers change code to get the positive tests working, that change is likely to affect the negative test cases as well.  All specified functions should have positive tests.  Get the positive test cases working before concentrating on the negative test cases.

The developers in our group came up with the following simple fixture for dealing with errors that generate error messages.  Note that this fixture is designed around the idea that when there is an error, an error message is generated.  If you are not expecting an error message, you include the fixture and put the following table in your test.  This is something you would put in your positive tests.

ErrorMessage
Messages

When you run your test, if there are no errors, you get the table with the two gray rows.  If, on the other hand, you get an error you are not expecting, the fixture adds a new row to the table with the error message and the annotation surplus, i.e. something you were not expecting.  Since this is an error you are not expecting, it colors the row red.

ErrorMessage
Messages
Something went wrong. surplus

Now let's say that you were creating a negative test (one in which you were expecting an error) where you expected the error message “Something went wrong.”  You would create the following table and put it in your test.

ErrorMessage
Messages
Something went wrong.

When you run your test, if that is the only error you get, the row with the error message turns green.  You were expecting the error message and you got what you expected.

ErrorMessage
Messages
Something went wrong.

Now let's suppose that you ran the test where you were expecting the error message “Something went wrong,” and for whatever reason that error message was not produced.  In that case the fixture would indicate that the error message was missing and turn the row red.

ErrorMessage
Messages
Something went wrong. missing

Here is what happens if you are expecting the error message “Something went wrong,” got that message, but also got the additional error message “Something else went wrong.”  The message you were expecting would be shown in green and the additional message would be shown in red as surplus.

ErrorMessage
Messages
Something went wrong.
Something else went wrong. surplus
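Under the covers, this kind of comparison is just a multiset match between the expected and the produced messages.  Here is a minimal Python sketch of that bookkeeping (this is not the fixture's actual code, and the function name is illustrative), using the scenario above, where one message was expected but an extra one was produced:

```python
from collections import Counter

def classify_messages(expected, actual):
    """Compare expected error messages against those actually produced.

    Returns (matched, missing, surplus) lists, mirroring how the
    ErrorMessage fixture colors rows: matched rows turn green, while
    missing and surplus rows turn red.
    """
    exp = Counter(expected)
    act = Counter(actual)
    matched = list((exp & act).elements())   # in both lists
    missing = list((exp - act).elements())   # expected but not produced
    surplus = list((act - exp).elements())   # produced but not expected
    return matched, missing, surplus

# We expected one message but the application produced two.
matched, missing, surplus = classify_messages(
    ["Something went wrong."],
    ["Something went wrong.", "Something else went wrong."],
)
print(matched)   # ['Something went wrong.']
print(missing)   # []
print(surplus)   # ['Something else went wrong.']
```

Adding the second message to the expected list moves it from surplus to matched, which is exactly what happens when the extra row is added to the table below.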

When we ask someone about the second error message, we learn that it was due to a change in requirements (of course this would never happen in the real world) and that it really should be there.  Now we can add that row to our table as an expected error message as shown below.

ErrorMessage
Messages
Something went wrong.
Something else went wrong.

The next time we run the tests, we are expecting both error messages and getting both error messages so everybody is happy.

ErrorMessage
Messages
Something went wrong.
Something else went wrong.

The book Fit for Developing Software discusses error reporting in a ColumnFixture and in an ActionFixture.  We built a number of DoFixtures to execute actions in our application.  When an action caused an error, the ErrorMessage fixture displayed the error message.  We found this method very effective for testing error handling in the application.

Get Fit (but not necessarily FitNesse)

December 4, 2007

As I said in “About Me”, I believe in using whatever tool makes sense for what you are trying to test.  In a recent situation, we needed to test a “rules engine” for modifying URLs.  The test cases were pretty straightforward: the current URLs and what we expected them to be after being modified by the rules engine.

The test tool we chose for this situation is “Fit”.  You may have heard of Fit and FitNesse before.  FitNesse is the wiki that was originally used to drive Fit.  For background information on Fit and FitNesse, go to http://fit.c2.com/

There are a number of drawbacks to using FitNesse to drive the tests.  The two most significant are that it is difficult to put the test cases under version control and that only one set of tests can be run from the wiki at a time.

The solution was to run the tests with Fit but not use FitNesse.  This solved the two main problems of using FitNesse.  First, the tests could be put under version control.  Second, anyone can check out the tests and the Fit driver to their own system and run the tests without worrying about whether someone else is running them.

We chose Fit because the test cases are pairs of inputs and expected outputs.  Fit allows you to specify a table with a column of inputs and a column of expected outputs.  Each row is a test case with input and the expected output.  We knew from the beginning that we were going to have a large number of test cases.  Using a table structure made it easy to manage data for the tests.  The table below shows an example of what inputs and expected results might look like.

inputurl1 expectedoutputurl1
inputurl2 expectedoutputurl2
inputurl3 expectedoutputurl3
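The check behind such a table is simple: run each input through the rules engine and compare the result against the expected output.  Here is a self-contained Python sketch of that idea; the rule shown (stripping a “www.” prefix) is purely illustrative and is not the actual rules engine:

```python
def apply_rules(url):
    """Stand-in for the rules engine: strip a leading 'www.'.

    Illustrative only; the real engine applied its own rule set.
    """
    prefix = "http://www."
    if url.startswith(prefix):
        return "http://" + url[len(prefix):]
    return url

# Each row is (input URL, expected output URL), as in the Fit table.
cases = [
    ("http://www.example.com/a", "http://example.com/a"),
    ("http://example.com/b",     "http://example.com/b"),  # unmodified pass-through
]

for given, expected in cases:
    actual = apply_rules(given)
    status = "right" if actual == expected else "wrong"
    print(f"{status}: {given} -> {actual}")
```

Fit's ColumnFixture does essentially this for you, coloring each expected-output cell green or red, so testers only maintain the table rows.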

Anyone who is familiar with the application should be able to understand the inputs and expected results.  Developers as well as testers are able to create test case pairs.  Developers can run the tests on their system while they are writing new code.

The test cases we developed fell into four categories.  The first was existing URLs that we expected to be modified in a certain way.  The second was existing URLs that we expected to be left unmodified by the rules engine.  The third was URLs that did not yet exist but that the rules should modify in a certain way.  The fourth was URLs that did not yet exist and that the rules engine should leave unmodified.

We have implemented this testing system for the rules engine and it has proven effective.  There are about 300 test case pairs.  The test cases were developed by two testers and three developers.

For a further discussion of where Fit can be used as an effective testing tool, see the following link:

http://awta.wikispaces.com/Fit+-+Fitness+Implementation+Strategy