Achieving Software Testing Nirvana in a CI/CD environment with a sprinkling of Agile – Part IV

Part IV – Automated Testing

The Commit Stage

[Image: Testing Pyramid]


What is the Commit Stage?

The commit stage begins when a developer is happy that their code is production ready and commits it to their version control system. Before committing, the developer should have confidence in the change: it should be covered by unit, component and acceptance tests, and static code analysis should have been run, with metrics such as cyclomatic complexity measured. A developer may also make use of pre-tested commit to validate that their code does not break anything on the main trunk; more on that in a moment.

Continuous Integration – Build

After a developer has checked into source control, a CI server can be configured to pick up those changes against a particular build. A build configuration can compile the code, add the resulting binaries to an artefact repository and run any static code checks; tools like NDepend, StyleCop and FxCop can be used for this. Next, the build configuration should run the unit tests against the binaries created during compilation. If those pass, the component tests are run against the same binaries, and if those also pass, it's time to promote to the business-focused acceptance test stage.

Something to keep in mind: if a test fails during unit or component testing, you should not abort the stage on the first failure. Let the rest of the tests continue to run, so you can verify that the remaining tests still pass and focus only on the failing test(s) that need fixing.
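A minimal sketch of that idea in Python (the post's tooling is .NET, but the principle is the same, and `run_stage` is an illustrative helper, not a real CI server API): every test in the stage runs, failures are collected, and the stage only reports at the end, so the team sees the full list of broken tests rather than just the first.

```python
# Run every test in the commit stage, collect all failures, and only
# report at the end, instead of aborting on the first broken test.
def run_stage(tests):
    failures = []
    for name, test in tests:
        try:
            test()
        except AssertionError as exc:
            failures.append((name, str(exc)))  # record it, keep going
    return failures  # an empty list means the stage passed

def passing():
    assert 1 + 1 == 2

def failing():
    assert 2 + 2 == 5, "arithmetic is broken"

results = run_stage([("passing", passing), ("failing", failing)])
print(results)  # every failure is reported, not just the first
```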

Unit & Component Testing

Unit tests are used to test the public behaviour of a method on a class. We don't want to test the internals or private methods, and we should avoid using the likes of reflection in .NET to find internal and private methods and test those. Internal and private methods are implementation details; they are subject to change, and tests written against them will need updating with every change. So test only the public API exposed by your class or service, asserting that the correct return value or manipulation occurs. For example, suppose you have a currency addition method that takes two values in one currency and gives you a result in another. You want to test that the currency coming back is the correct currency and that the sum of the two values is correct in the expected currency; you don't need to test the internal currency converter directly, because verifying the returned value covers it. Every layer of an application should have unit tests against all publicly exposed APIs written by the developers. At the unit test level, any interaction with a publicly exposed method in another class should be mocked.
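The currency example might look like the following Python sketch (the post is .NET-focused, so read this as pseudocode for the equivalent C#; `CurrencyAdder` and its injected converter are hypothetical names). The converter collaborator is mocked, and the test asserts only on the public behaviour: the returned amount and currency.

```python
# Unit-test sketch: mock the collaborator, assert only on the public API.
from unittest.mock import Mock

class CurrencyAdder:
    def __init__(self, converter):
        self._converter = converter  # injected collaborator, mocked in unit tests

    def add(self, amount_a, amount_b, from_currency, to_currency):
        total = amount_a + amount_b
        converted = self._converter.convert(total, from_currency, to_currency)
        return (converted, to_currency)

# The converter is a mock; we never test its internals here.
converter = Mock()
converter.convert.return_value = 11.70  # canned GBP -> EUR result

adder = CurrencyAdder(converter)
result = adder.add(4, 6, "GBP", "EUR")

assert result == (11.70, "EUR")                       # correct value and currency
converter.convert.assert_called_once_with(10, "GBP", "EUR")
```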

Component tests are used to test the interaction of one class with another, or one layer of an application with another. They are written much like unit tests, except that interactions with the layers below are no longer mocked; the idea of a component test is to cover the integration between the different layers. Like unit tests, a component test should only exercise the publicly exposed API and verify that the correct return value or data manipulation occurs. If the method under test interacts with many other layers or classes outside the scope of a feature/story, those APIs should still be mocked, so that only the integration with the components relevant to delivering the feature/story is exercised.
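Continuing the hypothetical currency example as a component-test sketch: the same public API is exercised, but the real converter is wired in instead of a mock, so the test now covers the interaction between the two layers.

```python
# Component-test sketch: same public API, but the real collaborator is used.
class FixedRateConverter:
    RATES = {("GBP", "EUR"): 1.17}  # illustrative fixed rate

    def convert(self, amount, from_currency, to_currency):
        return round(amount * self.RATES[(from_currency, to_currency)], 2)

class CurrencyAdder:
    def __init__(self, converter):
        self._converter = converter

    def add(self, amount_a, amount_b, from_currency, to_currency):
        total = amount_a + amount_b
        return (self._converter.convert(total, from_currency, to_currency), to_currency)

# No mocks: the test covers CurrencyAdder *and* its integration with the converter.
adder = CurrencyAdder(FixedRateConverter())
assert adder.add(4, 6, "GBP", "EUR") == (11.7, "EUR")
```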

Use of Pre-Tested Commit

As previously mentioned, pre-tested commit gives confidence that a check-in to source control will not result in a broken build. If a developer checks in just before leaving work at the end of the day, a colleague working remotely or in a different global location can pick up code that isn't broken; otherwise, if the checked-in code broke the build after the developer had gone home, the remote developer might have to wait for them to come in again and fix the issue. Pre-tested commit works by letting the developer submit their code to a CI server like TeamCity instead of directly to version control. TeamCity grabs the latest source from version control, applies the developer's changes without submitting them, and runs the suite of unit and component tests against the result (using whatever build configuration you have set up on your CI server). If everything passes against the build configuration the developer is committing against, the code is safely checked into source control and the build is run once more on CI. If the pre-tested commit step fails, the developer is notified and the code is not checked into source control.
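The flow can be sketched as a toy Python model (the function names here are illustrative, not TeamCity's API): merge the developer's changes with the latest trunk, run the build against the merged result, and only commit to version control if it passes.

```python
# Toy model of the pre-tested (gated) commit flow.
def pretested_commit(local_changes, fetch_trunk, run_build, commit_to_vcs, notify):
    candidate = fetch_trunk() + local_changes  # changes applied on latest trunk
    if run_build(candidate):                   # unit + component tests pass?
        commit_to_vcs(candidate)               # only now does code reach VCS
        return True
    notify("Build failed: changes were NOT committed")
    return False

# Demo with stand-in callables:
committed = []
ok = pretested_commit(
    ["feature.cs"],
    fetch_trunk=lambda: ["trunk.cs"],
    run_build=lambda code: True,          # pretend the build passed
    commit_to_vcs=committed.extend,
    notify=print,
)
print(ok, committed)  # the merged candidate reached version control
```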

[Image: Scenario 1]



Feedback & Trends

One of the most important parts of the continuous integration pipeline is feedback. During the commit stage, coverage tools like dotCover can be used to check your code coverage and validate that your unit and component tests cover a certain percentage of your codebase. If code coverage goes down, you should fail the stage; continuous delivery should be equipped to allow continuous improvement. You should also keep track of metrics like cyclomatic complexity and, as before, fail the build if a check-in causes it to increase.
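A simple sketch of such a coverage gate (the percentages here are dummy values; in practice they would come from a tool such as dotCover's report, and the CI server would fail the stage on a non-zero code):

```python
# Fail the stage if coverage drops below the previous build's figure.
def coverage_gate(current_pct, previous_pct):
    if current_pct < previous_pct:
        print(f"Coverage fell from {previous_pct}% to {current_pct}% - failing stage")
        return 1  # non-zero exit code fails the build
    return 0

code = coverage_gate(current_pct=79.2, previous_pct=80.0)
print("exit code:", code)  # non-zero -> the CI server fails the commit stage
```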

The quicker the feedback, the quicker the fix or improvement; the better the software becomes, confidence increases and, ultimately, value increases. Feedback gives us metrics, and metrics are only good if we can put actions against them to improve. Real metrics reflect business value, not technology; they should measure the value to the customer once we release to production. A story/feature is never done until it is released to the customer and providing feedback. Consistent process and effective feedback result in agility.

The Acceptance Stage

What are acceptance criteria?

Acceptance criteria are the gates that define whether a story can be accepted as complete and would qualify as releasable. For example, a story that allows a user to log in to a website might contain acceptance criteria like:

  1. There are username and password fields where a user can enter their details
  2. If the user is not found, the user is notified on the UI that the user was not found
  3. If the user is found but the password does not match, the user is notified on the UI that the user was not found
  4. If the username and password match the details in the database, the user is allocated to their user group with the appropriate permissions
  5. The user is redirected to the secure section of the website
  6. The user's session is created
  7. If the user is inactive for 15 minutes, log the user out.

These are all examples of both happy-path and sad-path tests that would be covered by a simple story such as logging into a website. Creating acceptance criteria shouldn't be the sole responsibility of the Product Owner; it is a cross-functional activity that should involve a QA, the PO and a developer: the QA can decide what can be tested, the developer will know the work involved, and the PO will know what they want from the story.

[Image: Agile Story]

Front-end and Back-end API Testing

There are a couple of avenues for testing the front-end parts of an application. For acceptance testing, the front-end can be driven with tools like the popular Selenium, or WatiN for .NET. For unit and component testing of a front-end that contains JavaScript, the likes of QUnit, Jasmine or Jest can be used, to name but a few. For acceptance testing, we would use the likes of Selenium to run our automated tests, ideally together with the abstraction of the Window Driver pattern; more on that shortly. Back-end API testing can be automated in its own right if the application is loosely coupled and the back-end sits behind a RESTful API. If the back-end is not accessible over HTTP or TCP, then the best way to cover it is through the front-end tests with Selenium.

Test Isolation

When running the acceptance suite of tests, it is pretty critical that the tests implement an effective setup and teardown and can run in isolation. This is important when the tests interact with the DB and update, remove or add rows of data; it is important for the tests to clean up once they complete. For example, one test may add a row of data, whereas another might check the row count. This is especially important when parallelising tests or distributing them across a build grid on a CI server. Unlike the commit stage, where we should continue running the rest of the tests if one fails, on the acceptance stage it is important to fail fast, because most acceptance tests take some time to run. Report the issue back to the team so they can stop what they are doing and fix it, rather than waiting for the rest of the build to complete, which could take hours.
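A minimal Python sketch of that isolation, using an in-memory SQLite database as a stand-in for whatever DB the acceptance tests hit: each test gets a fresh database in setUp and releases it in tearDown, so the "add a row" test can never skew the "row count" test.

```python
# Each test builds and tears down its own state, so tests can run in
# isolation, in any order, or in parallel across a build grid.
import sqlite3
import unittest

class UserTableTests(unittest.TestCase):
    def setUp(self):
        self.db = sqlite3.connect(":memory:")  # fresh database per test
        self.db.execute("CREATE TABLE users (name TEXT)")

    def tearDown(self):
        self.db.close()                        # clean up after each test

    def test_add_row(self):
        self.db.execute("INSERT INTO users VALUES ('alice')")
        count = self.db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(count, 1)

    def test_row_count_starts_at_zero(self):
        # Unaffected by test_add_row, because no state is ever shared.
        count = self.db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(count, 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(UserTableTests)
run_result = unittest.TextTestRunner(verbosity=0).run(suite)
print("isolated tests passed:", run_result.wasSuccessful())
```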

Window Driver Pattern

The Window Driver pattern is used to abstract away the underlying connection to the page object that Selenium drives; you write your own DSL that communicates with the abstracted window driver and asserts the various acceptance tests. For example, routing to a login page without the Window Driver pattern might look like var route = new Route("/login"); whereas with the pattern we could write var loginPage = new LoginPage();. The Window Driver pattern means that when something on the UI changes, we change the abstraction instead of the tests, so our tests can remain intact.
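Here is a sketch of the idea in Python (standing in for the C#/Selenium setup; LoginPage, StubDriver and the element IDs are all hypothetical names). The test only talks to the LoginPage abstraction; only that class knows the route and element IDs, so a UI change means editing one class, not every test.

```python
# StubDriver stands in for a real Selenium WebDriver; it just records
# what the page object asks it to do.
class StubDriver:
    def __init__(self):
        self.visited = []
        self.fields = {}

    def goto(self, path):
        self.visited.append(path)

    def type_into(self, element_id, text):
        self.fields[element_id] = text

class LoginPage:
    URL = "/login"                 # if the route or element IDs change,
    USERNAME_ID = "username-box"   # only this class changes -
    PASSWORD_ID = "password-box"   # the tests stay intact

    def __init__(self, driver):
        self._driver = driver
        self._driver.goto(self.URL)

    def login_as(self, username, password):
        self._driver.type_into(self.USERNAME_ID, username)
        self._driver.type_into(self.PASSWORD_ID, password)

driver = StubDriver()
LoginPage(driver).login_as("ian", "secret")
print(driver.visited, driver.fields)
```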

Don’t rely on Asynchrony

Sometimes tests need to exercise methods that use threads or tasks, and as a result there may be a delay, such as when polling an application directory every few seconds for new files. The trick is to make the asynchronous calls appear synchronous, by invoking them one after another. If we must wait on a response, the best approach is to break the delay into sensible polling intervals: instead of waiting a flat three seconds to check, say, the file system, poll it every 100 milliseconds until the three-second threshold is reached.
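That polling loop can be captured in a small helper, sketched here in Python (`wait_until` is an illustrative name, not from any particular test framework):

```python
# Poll every `interval` seconds until the condition holds or `timeout`
# is reached, instead of sleeping for the full timeout up front.
import time

def wait_until(condition, timeout=3.0, interval=0.1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True      # succeeded early - no wasted waiting
        time.sleep(interval)
    return condition()       # one final check at the deadline

# e.g. waiting for a file the application writes asynchronously:
# assert wait_until(lambda: os.path.exists("drop/incoming.csv"))
```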

Using Test Doubles

If your tests require communication with external third-party systems that you don't want as part of your tests, the Window Driver pattern becomes even more valuable: you can use the abstraction to provide test doubles, or to inject mocks at runtime for the APIs you are not concerned with.
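For instance, a checkout flow that normally calls a third-party payment gateway can be tested with a canned double injected through the abstraction (a Python sketch; the checkout function and gateway interface are hypothetical):

```python
# The external payment gateway is replaced by a mock with a canned
# response, so the acceptance run never touches the real third party.
from unittest.mock import Mock

def checkout(order_total, payment_gateway):
    reference = payment_gateway.charge(order_total)  # external call
    return {"paid": True, "reference": reference}

fake_gateway = Mock()
fake_gateway.charge.return_value = "REF-001"         # canned response

outcome = checkout(49.99, fake_gateway)
assert outcome == {"paid": True, "reference": "REF-001"}
fake_gateway.charge.assert_called_once_with(49.99)
```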

Testing integration with External Systems

There should still be tests that check the existence and validity of the connections to those external systems. These shouldn't be a major focus; they are simply a confirmation that nothing has broken and the integration points are still available and working as expected.

Recording Acceptance Tests

Sometimes it is difficult to see what went wrong when an acceptance test fails during the acceptance stage, because, put simply, you didn't see it fail; you only get access to the report afterwards. A useful tool that can help with this is VNC2Swf, which can record UI tests as they are actioned and completed. If one were to fail, you get a screen-capture recording of all the events that took place during the automated test run, so you can pinpoint the cause of the error, for example the automated test not finding a UI element such as the password entry box, or failing to select the login button because the element's ID changed with the latest check-in and the Window Driver abstraction layer wasn't updated to reflect it.

Compatibility Testing

Acceptance tests should be repeatable so that they can be automated against all browsers such as IE, Chrome and Firefox and mobile browsers if targeting such platforms.

Exploratory Testing

What is it?

Exploratory testing is a test technique whereby the tester tests by learning: the more tests that are done, the more the tester learns, and the more new tests are created as a result of the experience gained. The key to exploratory testing is the cognitive engagement of the tester. Manual testing is a technique used in exploratory testing.

Manual Testing

Manual testing allows a tester to verify the behaviour of the application by checking that any new features added as a result of a check-in meet the acceptance criteria outlined in that feature's story. In an automated testing environment on a Continuous Delivery pipeline, manual testing can be used to 1) regression test and 2) validate anything that cannot be covered by automated acceptance tests. This might be where a tester needs to confirm that acting on one piece of functionality in the system under test causes another piece of functionality to behave in a certain way on a third-party system's GUI, something that may not be easily automated.

Relying on Manual Testing can be costly!

The more we rely on manual testing, the longer it takes for a feature to be released to production, which means the longer it takes for feedback to come back from our users and the longer the business has to wait for those potential financial wins. If something simply can't be added to an automated acceptance test, then it will have to be manually tested, but the main goal should be automation, which means more focus on developing features rather than testing later in the pipeline. In the book Continuous Delivery, Humble and Farley report one project where manual testing cost the business £3 million prior to each release.

Capacity and Security Stage – Non Functional Requirements

What is Performance Testing?

Performance testing is a catch-all term. Performance is really a measure of the time it takes for a single transaction to travel from one point in the application to another, and it can be measured in isolation or under load. What the business is really interested in is throughput, or capacity. Throughput is the number of transactions an application can handle in a given time frame while maintaining an acceptable response time for each individual request.

Re-use of Acceptance Tests

One way to gauge the capacity of an application is to re-use the already-written acceptance tests, repeating them over and over again to measure how much repeated load your application can handle against a like-live environment. It is pretty important that the environment under test is as close as possible to, if not an exact replica of, the production environment, but with the new features added, so you can verify that the new features do not put further strain on the environment or degrade its capacity. You shouldn't have to write new tests for capacity testing.
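A rough Python sketch of the idea: run one test repeatedly in parallel and report throughput in transactions per second. Here `simulated_transaction` is a stand-in for a real acceptance test, and the worker/repetition counts are arbitrary.

```python
# Re-use one test as a capacity test: repeat it under parallel load and
# measure throughput (successful transactions per second).
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_transaction():
    time.sleep(0.01)  # stand-in for a real request/response round trip
    return True

def measure_throughput(test, repetitions=100, workers=10):
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda _: test(), range(repetitions)))
    elapsed = time.monotonic() - start
    return sum(results) / elapsed  # successful transactions per second

tps = measure_throughput(simulated_transaction)
print(f"throughput: {tps:.0f} transactions/second")
```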

Non-Functional Requirements

This is something that can be covered by acceptance testing, but the aim of NFR testing is to verify things like TLS (Transport Layer Security) being in place and that certain transactions can only be transmitted over HTTPS; testing what can and can't be cached; or that only certain environments are accessible by your application, for example verifying that a firewall is in place to block requests from certain points.
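One such check might be sketched as follows in Python (the endpoint table is an illustrative stand-in for whatever configuration your application actually exposes): assert that every endpoint flagged as sensitive is served only over HTTPS.

```python
# NFR check: no sensitive endpoint may be served over plain HTTP.
from urllib.parse import urlparse

ENDPOINTS = {
    "https://example.com/login": {"sensitive": True},
    "https://example.com/pay":   {"sensitive": True},
    "http://example.com/static": {"sensitive": False},  # caching-friendly
}

def insecure_sensitive_endpoints(endpoints):
    return [url for url, meta in endpoints.items()
            if meta["sensitive"] and urlparse(url).scheme != "https"]

violations = insecure_sensitive_endpoints(ENDPOINTS)
assert violations == [], f"sensitive endpoints over plain HTTP: {violations}"
```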


Staging/UAT Stage

Once all testing has been completed, this is the sign-off stage: the product owner can sign off a feature that has made it successfully into this environment and demo it to the customer or business. It's important, again, that this environment is production-like; in a lot of cases the capacity testing environment is an acceptable environment to use for staging or UAT, provided that all state from the capacity and security tests has been cleaned up or torn down.


Thanks for reading and keep building awesome.

Until next time!

Ian B

P.S. You can also find me on Twitter: @ianaldo21
