Test automation has become a fundamental step of the software development life cycle. It’s essential for ensuring high software quality and for reducing the cost of running a large and repetitive suite of tests.
While no one can argue with the benefits, an often overlooked and underestimated aspect is the cost.
Even within the specific context of regression testing, one of the areas where test automation has proven most cost-effective, a major obstacle to large-scale adoption is the effort required to write the test cases to be automated.
This is the information to be specified for each scenario being tested:
- The inputs
- The actions to be performed
- The expected output
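As an illustration, a test case combining these three elements can be modelled as a simple record. The sketch below is hypothetical: the `TestCase` structure, field names, and sample values are illustrative and not part of the actual solution.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One automated regression scenario: inputs, actions, expected output."""
    name: str
    inputs: dict             # input data fed into the system
    actions: list[str]       # ordered actions to perform
    expected_output: dict    # output the scenario must produce

# Example scenario (values are purely illustrative)
tc = TestCase(
    name="book_repo_trade",
    inputs={"instrument": "REPO", "notional": 1_000_000},
    actions=["book_trade", "confirm"],
    expected_output={"status": "BOOKED"},
)
```

Keeping this structure uniform across all scenarios is what makes it cheap to add new test cases as features are added.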
Writing the test cases should be part of the development process, adopted at the beginning of a project.
The number of test cases should grow in parallel with the number of features implemented in the software.
Key elements to manage this complexity and ensure this process is cost-effective include:
- Building infrastructure that makes the writing of new test cases as simple as possible
- Focusing on realistic use cases, particularly the ones most frequently used by customers in their production environments
From the early stages of the re-engineering of the Anvil Trade Processing solution, ION’s Secured Funding team followed this approach.
We have refined this approach as we have built more functionality into the solution and extended it across the entire range of Secured Funding products. This includes building test automation infrastructure that makes it easy to create test cases covering the whole product stack end to end.
This has allowed us to evolve the solution continuously with limited regression risks.
This is all well and good, but the approach has a limitation that is often overlooked.
When developing new functionality, we cannot know the specific set of input data that our customers will use.
Clients are aware of this limitation and perform their own UAT before promoting new versions to production, but this falls far short of covering all possible real-world usage scenarios.
In recent years, customers have shown growing interest in using ION’s own Test Automation infrastructure to supplement their own regression testing.
With the goal of validating new versions before deploying to production, customers want to use their own input data and a fully representative sample of the set of actions they undertake.
As with pre-release versions of the solution, the main downside is the high cost of implementing a wide set of test cases from scratch.
A radical new approach to addressing this problem requires answering this question: how can we automate the writing of test cases, in addition to their execution?
When software is used in a production environment, we can reasonably assume that:
- The input data fed into the solution is consistent in its structure and makeup (even if the content will vary over time)
- The set of actions performed over a certain observation period (e.g. daily or weekly) does not vary wildly (even if some actions are performed at different frequencies e.g. hundreds of times a day or at month-end)
- The output produced by the solution is correct—consumers of that data can rely on the output to be accurate, well-formed, and valid
Under these assumptions, the most efficient way to collect a wide and—most importantly—valid set of test cases for a given business lies in capturing the input data, the actions, and the output data from the solution version running in production.
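The capture step above can be pictured as an event log attached to the running system. The following is a minimal sketch under that assumption; the `ProductionRecorder` class and its event kinds are illustrative, not the actual ION tooling.

```python
import json
import time

class ProductionRecorder:
    """Capture inputs, actions, and outputs observed in production,
    so they can later be converted into regression test cases."""

    def __init__(self):
        self.events = []

    def record(self, kind, payload):
        # kind is one of "input", "action", "output"
        self.events.append({"kind": kind, "ts": time.time(), "payload": payload})

    def dump(self, path):
        # Persist the captured events for later conversion into test cases
        with open(path, "w") as f:
            json.dump(self.events, f)
```

Because the recorder only observes, it can run alongside normal trading activity without changing the behaviour of the system being captured.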
Our team has started experimenting with this approach by instrumenting the solution in a way that allows us to test this, and we have obtained very promising results.
Through the tools that we’ve built, we can now record the trade booking scenarios executed in production over a period of trading activity and automatically convert them into test cases, which can then be executed automatically in a production mirror environment.
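To illustrate the record-and-replay idea, the sketch below groups captured events into a test case and replays it against a mirror system. All names here (`events_to_test_case`, `replay`, the `load`/`perform` interface) are hypothetical and stand in for the real tooling.

```python
def events_to_test_case(events):
    """Group one recorded scenario's events into a replayable test case."""
    return {
        "inputs":   [e["payload"] for e in events if e["kind"] == "input"],
        "actions":  [e["payload"] for e in events if e["kind"] == "action"],
        "expected": [e["payload"] for e in events if e["kind"] == "output"],
    }

def replay(test_case, system):
    """Feed captured inputs and actions into a mirror environment and
    compare its outputs with those captured in production."""
    for i in test_case["inputs"]:
        system.load(i)
    actual = [system.perform(a) for a in test_case["actions"]]
    return actual == test_case["expected"]
```

The key property is that the production run itself supplies the expected outputs, so no one has to write them by hand.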
We benchmarked this for a typical customer, for whom the data captured over one week corresponded to several thousand booked trades and events.
The end-to-end process of generating the test cases from the available data, executing them with our infrastructure, and validating the results against the actual production information took less than 10 hours, with room for further optimizations.
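The validation step can be pictured as a field-level comparison between the mirror-run output and the output captured in production. This `validate` helper is a simplified illustration of that comparison, not the actual tooling.

```python
def validate(actual, expected):
    """Report field-level mismatches between a mirror-run output record
    and the corresponding record captured in production."""
    return {
        k: (actual.get(k), expected.get(k))
        for k in set(actual) | set(expected)
        if actual.get(k) != expected.get(k)
    }  # an empty dict means the run matches production
```

Mismatches surface as small, reviewable diffs rather than requiring a person to inspect thousands of records by hand.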
Managing such extensive testing activity manually—from capturing the data, to executing the tests, to validating the results—would be simply prohibitive.
These encouraging results indicate that big efficiency gains are achievable in test automation tooling, particularly for regression testing. All it takes is investing in the right infrastructure.
We are now starting to roll this capability out to our customers and expect this to become a key part of ION’s Secured Funding product offering.
Learn more about our solutions
Our clients use a tailored combination of ION standard products and services that cover real-time trading checks, automated testing, annual stress-testing and reports, and smart safety monitoring.