
Wednesday, March 18, 2015

Contract testing

Let's imagine the following scenario...
We are working on a distributed system with many applications.
The developers understand the importance of avoiding coupling among components, so instead of building applications that are binary-dependent on one another,
they decide to create RESTful applications that communicate via XML and JSON.

During the development of a feature, the development team made a change to the API and unknowingly broke one of the consumer apps.
Unfortunately, this bug was very expensive: the company only managed to discover it in its replica pre-production environment, through a long-running
end-to-end functional test. After determining that what was broken was actually an XML marshaller, there was no quick fix and they had to roll back.

In the root cause analysis meeting, the developers from each of the teams that own the failing apps realised that the API change was the reason for the bug,
and that no corresponding work had been done in one of the unmarshallers.
The developers were told to fix the bug and also to come up with a solution that would prevent this from happening again.

After fixing the bug, the developers took some time to think about how they could catch this kind of bug before it reached the pre-production environment where the expensive
integration tests run. One of them said, "What we need is consumer contract testing!"...

Consumer contract testing allows consumers and providers of an API to know whether their latest changes to their marshallers or unmarshallers could potentially be
harmful to the other party, without the need to perform an integration test. This is how it works:


1- The provider of the API publishes an example of the API somewhere the consumer can access it (e.g. publishing it in a repo, sending it via email...).
2- The consumer takes the API example and writes a test that tolerantly accesses the values of interest.
   This in-document path (e.g. XPath, JSONPath...) used to retrieve the values from the API example is known as the contract.
3- The consumer publishes the contract in a place where they know the provider has access to it (e.g. publishing it in a repo, sending it via email...).
4- The provider takes the contract and uses it in a test against the generated output of the application. If that test fails, the provider knows that releasing the current version under test could potentially break the consumer (and a negotiation can take place).

Let's now have a look at a practical example of each of the steps above.

1- The developers who own the provider app take, from their passing acceptance test, the output that the application sends back to the consumer, and they save
it into a file called "apiexample.xml", which looks like this:

 <output>  
      <content>  
           <partA>A</partA>  
           <partB>B</partB>  
      </content>  
 </output>  

They send this file over email to the team that owns the consumer application.

2- The developers who own the consumer app take the example and write queries against it to determine the contract they need. A unit test against the example works fine:

 @Test  
 public void apiExampleValidatesToContract() throws Exception {  
   // readExample and getSource are small helpers that load the example file as an XPath source.  
   XPath xPath = XPathFactory.newInstance().newXPath();  
   String value = xPath.evaluate("/output/content/partB", getSource(readExample("apiexample.xml")));  
   assertThat(value, is(notNullValue()));  
 }  

3- Now the developers know that the contract to access what they are interested in is:
 "/output/content/partB"
They can save it in a file called "contract.txt" and send it over email to the other team, so that team can make sure its output always conforms to the contract. Note that these tolerant
paths allow the provider to change any part of the API they want, as long as the contract is respected.
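To see why these tolerant paths matter, here is a small, self-contained sketch (the class name and the modified XML are hypothetical, not part of the original example): the provider renames partA and adds a new partC, yet the consumer's XPath contract still resolves, so nothing breaks.

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

public class TolerantContractDemo {

    // Applies the consumer's contract ("/output/content/partB") to any provider output.
    static String applyContract(String providerOutput) throws Exception {
        XPath xPath = XPathFactory.newInstance().newXPath();
        return xPath.evaluate("/output/content/partB",
                new InputSource(new StringReader(providerOutput)));
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical new version of the provider's output: partA was renamed
        // and a partC element was added, but partB is untouched.
        String newProviderOutput =
                "<output>"
              + "  <content>"
              + "    <renamedPartA>A</renamedPartA>"
              + "    <partB>B</partB>"
              + "    <partC>C</partC>"
              + "  </content>"
              + "</output>";
        // The contract still resolves, so the consumer is unaffected by the change.
        System.out.println(applyContract(newProviderOutput)); // prints "B"
    }
}
```

Had the provider instead renamed or moved partB itself, the XPath would resolve to an empty value and the contract test would fail, which is exactly the early warning we want.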

4- The provider reads the "contract.txt" file and writes a test in which the contract is applied to the application's output.

 @Test  
 public void generatedOutputValidatesToContract() throws Exception {  
   // Here readExample/getSource load the output generated by the provider's own acceptance test.  
   XPath xPath = XPathFactory.newInstance().newXPath();  
   String value = xPath.evaluate("/output/content/partB", getSource(readExample("apiexample.xml")));  
   assertThat(value, is(notNullValue()));  
 }  

Now, when either of the teams runs its build, it will know if it is breaching the contract, and the bug will be caught before it goes further than the development environment.

You can find the complete source code of this example here.

Wednesday, March 11, 2015

Yet Another Blog Article About Acceptance Testing


Acceptance tests are tests conducted to determine if the requirements of a specification are met.
In modern software development, we call this specification the acceptance criteria.

“Whenever possible”, it would be desirable to acceptance test the system end to end.
By end to end, I mean talking to the system from the outside, through its interfaces.

Note that at the beginning of the previous paragraph, I said “whenever possible”.
The reason is that it would be risky and also costly to integration test our code against other code we don't control/own. Sometimes applications within a system don't even belong to our company, or they are too costly and slow to run. Because of this, the number of full-stack system/functional tests should be kept very small, almost none.

In acceptance testing we often start from an assumption about those external systems we cannot control. The parts out of our control are faked, and the acceptance criteria are aimed at the parts we control.

When writing an acceptance test, there is a commonly used format for defining the acceptance criteria. It is well known as the “given, when, then” format:

- given: The setup/preconditions of the scenario that we will test. It contains what we expect from the remote systems (either internal or external) on which we depend.
- when: The specific call to the exposed interface we are testing.
- then: The validation of the results.

Today's acceptance tests are written with the help of live specification frameworks such as JBehave, Fit, FitNesse, Concordion, Yatspec...
Using these tools makes it easier both to understand complex scenarios and to maintain the criteria.


Understanding Yatspec

Next I will talk about writing acceptance tests with a popular live specification framework called Yatspec. I will explain some of its features and describe the way it presents the test report. I will also use an example to show how we can stub systems out of our control and use them in our acceptance test.

About Yatspec
- It's a live specification framework for Java (https://code.google.com/p/yatspec/)
- Produces readable HTML reports
- Supports table/parametrized tests
- Allows writing in given-when-then style

 
The scenario
The application we will be testing will receive a GET request from a client; it will then send subsequent GET requests to two remote systems (A and B), process the responses and POST the result to a third system (C), just before returning it to the client.
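The processing at the heart of this scenario, combining the replies from A and B and keeping only the odd numbers, could be sketched like this (the class and method names here are hypothetical; the real Application class is not shown in this post):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class OddNumberFilter {

    // Combines the comma-separated replies from systems A and B
    // and keeps only the odd numbers, preserving their order.
    public static String knownOddNumbers(String systemAReply, String systemBReply) {
        return Arrays.stream((systemAReply + "," + systemBReply).split(","))
                .map(String::trim)
                .filter(n -> Integer.parseInt(n) % 2 != 0)
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        System.out.println(knownOddNumbers("1,2,3", "4,5,6")); // prints "1,3,5"
    }
}
```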



The criteria
-Given System A will reply 1 2 3
-And System B will reply 4 5 6
-When the client asks for the known odd numbers
-Then the application responds 1 3 5
-Then 'System C' receives 1 3 5


Creating html reports
Before going in depth into our example, I want to spend some time discussing what Yatspec reports look like and the basics needed to create them (if you want to go directly to the scenario implementation, just skip this section).

When a Yatspec specification is run, it generates an HTML report. Advanced options allow you to publish it remotely, but by default it is written to a temporary file on the file system.
The terminal will tell you where it is, like this:
Yatspec output:
/tmp/acceptancetests/KnownOddNumbersTest.html
We can navigate to it from the browser's URL bar:
file:///tmp/acceptancetests/KnownOddNumbersTest.html

Let's have a look at how it is structured:


(a) This is the title of the report. If Yatspec finds the suffix 'Test' in the class name, it removes it and presents just the rest as the title.

 @RunWith(SpecRunner.class)  
 public class KnownOddNumbersTest extends TestState {  
      //Your tests  
 ...  
 }  


(b) The contents section shows a summary of all the test names (there can be multiple tests) in the same specification.



(c) This is the test name. We don't need to add any additional annotations; all we need is to write our test names in “camel case”. If the test throws any exception, it will not be shown in the report.


 @Test  
 public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
       //Test body...  
 }  


(d) At the beginning of each test, the criteria will be presented. Yatspec uses the contents of the method body to generate it. The methods given(), and(), when() and then() are inherited from TestState.java (later I will explain how to use them).

 
 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     given(systemARepliesWithNumbers("1,2,3"));  
     and(systemBRepliesWithNumbers("4,5,6"));  
     when(aRequestIsSentToTheApplication());  
     then(theApplicationReturnedValue(), is("1,3,5"));  
     then(systemCReceivedValue(),is("1,3,5"));  
   }  

(e) This is where the test result is shown. Yatspec colours this part green if the test passes, red if the test fails or orange if the test is not run.

(f) Interesting givens are the preconditions for the test to run. These preconditions are stored in the class TestState.java in an object called interestingGivens. The way we commonly populate it is by passing a GivensBuilder object to the method given(). The method and() can also be used to add more information to our interesting givens.
 
 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     given(systemARepliesWithNumbers("1,2,3"));  
     and(systemBRepliesWithNumbers("4,5,6"));  
     //...  
   }  
   private GivensBuilder systemARepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system A returns", numbers);  
       return givens;  
     };  
   }  
   private GivensBuilder systemBRepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system B returns", numbers);  
       return givens;  
     };  
   }  

(g) These are the captured inputs and outputs. Their purpose is to record values that go in or out of any component in the workflow. TestState.java contains an object called capturedInputsAndOutputs which we can add to or query from. Commonly we would indirectly add a value to capturedInputsAndOutputs to track the response of our application so it can be verified later, via a parameter of type ActionUnderTest.java passed to the when() clause method.

 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     //...  
     when(aRequestIsSentToTheApplication());  
     //...  
   }  
 private ActionUnderTest aRequestIsSentToTheApplication() {  
     return (givens, captures) -> {   
 //The second parameter of this lambda is capturedInputsAndOutputs  
       captures.add("application response", newClient()  
           .target("http://localhost:9999/")  
           .request().get().readEntity(String.class));  
       return captures;  
     };  
   }  


(h) These are the final verifications. They are created by the then() method. You can tell that an output was generated by the then() method because it is not highlighted in yellow.
A StateExtractor.java is responsible for the values in this section. The state extractor takes from the captures the values that were recorded previously, so a matcher can verify that they are correct.


 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     //...  
     then(theApplicationReturnedValue(), is("1,3,5"));  
   }  
 private StateExtractor<String> theApplicationReturnedValue() {  
     return captures -> captures.getType("application response", String.class);  
   }  
 }  

The scenario implementation
Now that we understand the criteria and have some basic understanding of Yatspec reports, let's write an acceptance test for the criteria described before.

In our scenario, Systems A, B and C are out of our control (let's imagine they are owned by other companies). We need to first query A and B and then send the processed result to C before replying to the client.
This means that our interesting givens will be the values returned from A and B, and our captured inputs and outputs will contain the input into C.

 
So let's have a look at how Systems A and B return the values previously saved in the interesting givens to the application, and also at how System C captures the input.

For this example, I created a class called FakeSystemTemplate.java which contains the boilerplate code necessary to create an embedded server. Systems A, B and C each inherit from it and provide specific handler implementations.

 public abstract class FakeSystemTemplate {  
   private final HttpServer server;  
   protected InterestingGivens givens;  
   protected CapturedInputAndOutputs captures;  
   public FakeSystemTemplate(int port, String context,InterestingGivens givens, CapturedInputAndOutputs captures) throws IOException {  
     this.givens = givens;  
     this.captures = captures;  
     InetSocketAddress socketAddress = new InetSocketAddress(port);  
     server = HttpServer.create(socketAddress,0);  
     server.createContext(context, customHandler());  
     server.start();  
   }  
   public abstract HttpHandler customHandler();  
   public void stopServer() {  
     server.stop(0);  
   }  
 }  


Later, when we create the acceptance test, we will see how we pass the interesting givens and the captured inputs and outputs to the systems.
Systems A and B will return the values stored in the interesting givens using a unique key (later we will see how these keys are set in the givens).


 public class SystemA extends FakeSystemTemplate {  
   public SystemA(int port, String context, InterestingGivens interestingGivens, CapturedInputAndOutputs capturedInputAndOutputs) throws IOException {  
     super(port, context, interestingGivens, capturedInputAndOutputs);  
   }  
   @Override  
   public HttpHandler customHandler() {  
     return httpExchange -> {  
       String response = givens.getType("system A returns", String.class);  
       httpExchange.sendResponseHeaders(200, response.length());  
       OutputStream outputStream = httpExchange.getResponseBody();  
       outputStream.write(response.getBytes());  
       outputStream.close();  
       httpExchange.close();  
       captures.add("output from system A", response);  
     };  
   }  
 } 
 
 public class SystemB extends FakeSystemTemplate {  
   public SystemB(int port, String context, InterestingGivens interestingGivens, CapturedInputAndOutputs capturedInputAndOutputs) throws IOException {  
     super(port, context, interestingGivens, capturedInputAndOutputs);  
   }  
   @Override  
   public HttpHandler customHandler() {  
     return httpExchange -> {  
       String response = givens.getType("system B returns", String.class);  
       httpExchange.sendResponseHeaders(200, response.length());  
       OutputStream outputStream = httpExchange.getResponseBody();  
       outputStream.write(response.getBytes());  
       outputStream.close();  
       httpExchange.close();  
       captures.add("output from system B", response);  
     };  
   }  
 }  


For System C, we will capture the arriving input.

 public class SystemC extends FakeSystemTemplate {  
   public SystemC(int port, String context, InterestingGivens interestingGivens, CapturedInputAndOutputs capturedInputAndOutputs) throws IOException {  
     super(port, context, interestingGivens, capturedInputAndOutputs);  
   }  
   @Override  
   public HttpHandler customHandler() {  
     return httpExchange -> {  
       Scanner scanner = new Scanner(httpExchange.getRequestBody());  
       String receivedMessage = "";  
       while(scanner.hasNext()) {  
         receivedMessage += scanner.next();  
       }  
       scanner.close();  
       httpExchange.sendResponseHeaders(200, 0);  
       httpExchange.close();  
       captures.add("system C received value", receivedMessage);  
     };  
   }  
 }  


Now that our remote systems are ready, let's write our test.


 @RunWith(SpecRunner.class)  
 public class KnownOddNumbersTest extends TestState {  
   private SystemA systemA;  
   private SystemB systemB;  
   private SystemC systemC;  
   private Application application;  
   @Before  
   public void setUp() throws Exception {  
     systemA = new SystemA(9996, "/", interestingGivens, capturedInputAndOutputs);  
     systemB = new SystemB(9997, "/", interestingGivens, capturedInputAndOutputs);  
     systemC = new SystemC(9998, "/", interestingGivens, capturedInputAndOutputs);  
     application = new Application(9999, "/");  
   }  
   @After  
   public void tearDown() throws Exception {  
     systemA.stopServer();  
     systemB.stopServer();  
     systemC.stopServer();  
     application.stopApplication();  
   }  
   @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     given(systemARepliesWithNumbers("1,2,3"));  
     and(systemBRepliesWithNumbers("4,5,6"));  
     when(aRequestIsSentToTheApplication());  
     then(theApplicationReturnedValue(), is("1,3,5"));  
     then(systemCReceivedValue(),is("1,3,5"));  
   }  
 }  


By extending TestState.java we get access to the interestingGivens and capturedInputsAndOutputs objects. We pass them to the remote systems; this way, Systems A and B know what we expect them to return, and C is able to capture its input.

The methods used inside given(), and(), when() and then() are just static fixture methods. I think it is good to avoid long classes, so the test class contains just the test; everything else is extracted into reusable fixture methods. Let's have a look at them.


 public class GivensFixture {  
   public static GivensBuilder systemARepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system A returns", numbers);  
       return givens;  
     };  
   }  
   public static GivensBuilder systemBRepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system B returns", numbers);  
       return givens;  
     };  
    }  
  }  
 
  public class WhenFixture {  
   public static ActionUnderTest aRequestIsSentToTheApplication() {  
     return (givens, captures) -> {  
       captures.add("application response", newClient().target("http://localhost:9999/").request().get().readEntity(String.class));  
       return captures;  
     };  
   }  
 }
 
 public class ThenFixture {  
   public static StateExtractor<String> theApplicationReturnedValue() {  
     return captures -> captures.getType("application response", String.class);  
   }  
   public static StateExtractor<String> systemCReceivedValue() {  
     return captures -> captures.getType("system C received value", String.class);  
   }  
 }  


Once we run the application, the acceptance test will go red. If we were practicing ATDD, the next thing to do would be to go into the production code and write unit tests to guide the creation of the code required to make the acceptance test go green. Remember the ATDD cycle.

 
The TDD of the final solution is out of the scope of this blog post, but you can find all the completed code at this git repo:


