A major challenge in creating any test automation suite is ensuring scalability, maintainability, and reliability all at once. Engineers need to spin up new tests quickly and consistently. New builds might, and often do, break existing automation, but resolution should take minutes or hours, not days or weeks. Furthermore, automation that works 80% of the time is of little use to anyone.
But this is only one part of the whole story. While the same challenges that we’ve always faced still exist, an automation suite around Selenium WebDriver needs to further address the measurement and reporting of test coverage and test success as well as build a strategy for change tolerance through object recognition and code maintainability. Such challenges are absolutely surmountable and worth tackling to unlock the power and accuracy that an automation suite built around Selenium can provide.
The keys to unlocking success with any Selenium implementation are good design and process. Testers may not be developers, but adopting tried and true development practices can go a long way in such an endeavor.
Keyword driven testing (KDT) enables an organization to separate the design and documentation of test cases from the data they consume and the mechanism by which they are executed. KDT, in its goals, is similar to the model-view-controller (MVC) pattern of software design. Much as MVC addresses the need to separate presentation from data modeling from functionality, KDT allows us to separate tests from data from execution.
KDT is quite different from most other test automation design approaches in that it focuses strongly on design and business logic rather than on the technical aspects of the application under test. The idea is to model application behavior with your tests while hiding the technical details of test implementation behind the scenes, exposing only the information that directly impacts a keyword’s usefulness for modeling business logic. So how do we do this?
Develop your framework. Unlike many other popular approaches to test automation, KDT is not as strongly supported by third-party tools and frameworks. While the end goal is to produce less technical test automation, this is accomplished with some technical magic behind the scenes, in which you’ll need to invest up front.
A successful KDT framework implementation needs to do a handful of things very well:
There are several things that Keyword Driven Testing truly excels at:
Successful KDT implementations are driven by strong design practices. To leverage the benefits of this approach, establish practices and process around keyword design. Create test cases at a high level that provide good test coverage, and follow up with a detailed design of all of your keywords before writing any code.
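To make that separation concrete, here is a minimal sketch of the idea in Python. The keyword names, test steps, and print statements are all hypothetical stand-ins; a real framework would implement each keyword against Selenium WebDriver. The point is that the test case itself is plain, readable data, kept apart from the code that executes it:

```python
# Minimal keyword-driven testing sketch: keywords are registered by name,
# and a test case is just a list of (keyword, arguments) rows.

KEYWORDS = {}

def keyword(name):
    """Register a function as a keyword so test rows can refer to it by name."""
    def register(func):
        KEYWORDS[name] = func
        return func
    return register

# Hypothetical keyword implementations; a real suite would drive
# Selenium WebDriver here instead of printing.
@keyword("open page")
def open_page(url):
    print(f"opening {url}")

@keyword("enter text")
def enter_text(field, value):
    print(f"typing {value!r} into {field}")

@keyword("click")
def click(element):
    print(f"clicking {element}")

def run_test(rows):
    """Execute a test case expressed as (keyword, *args) rows."""
    for name, *args in rows:
        KEYWORDS[name](*args)

# The test case reads like business logic, with no technical detail exposed.
login_test = [
    ("open page", "https://example.test/login"),
    ("enter text", "username", "alice"),
    ("enter text", "password", "s3cret"),
    ("click", "login button"),
]

run_test(login_test)
```

In a full implementation the rows would typically live in a spreadsheet or other data file rather than in code, which is what lets non-developers design and maintain tests.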
There’s no “one size fits all” test automation approach, and as such it is important to know the weaker points of any design pattern.
The Page Object Model is a design pattern that, in several key ways, stands in contrast to KDT. While KDT models application behavior and business logic, the Page Object Model does the exact opposite: it creates page objects that model the UI components of our application in order to accomplish two things in particular. First, we expose the services of a page to the test developer as a sort of API. Second, we abstract the deep knowledge of the page structure away from the automated test itself.
A successful page object implementation reduces maintenance costs by creating a clear separation between what our interface does or enables our user to do and what a page looks like. If business logic remains relatively stable the presentation layer of the application can change significantly without creating much rework.
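As an illustration, the Python sketch below shows a page object that exposes a `log_in` service while keeping locator details private. The page names, locators, and the in-memory fake driver are hypothetical stand-ins so the sketch runs without a browser; in practice the driver would be a real Selenium WebDriver instance:

```python
class LoginPage:
    """Models the login page: tests call its services, never its locators."""

    # Locator details are internal to the page object, so a UI change
    # means editing one class, not every test.
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("css selector", "button[type=submit]")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
        return HomePage(self.driver)  # a successful login lands on the home page


class HomePage:
    """Models the page shown after a successful login."""

    def __init__(self, driver):
        self.driver = driver


# A tiny in-memory stand-in for Selenium WebDriver; it records the
# actions performed on it instead of driving a browser.
class FakeElement:
    def __init__(self, log, locator):
        self.log, self.locator = log, locator

    def send_keys(self, text):
        self.log.append(("type", self.locator, text))

    def click(self):
        self.log.append(("click", self.locator))


class FakeDriver:
    def __init__(self):
        self.log = []

    def find_element(self, by, value):
        return FakeElement(self.log, (by, value))


driver = FakeDriver()
home = LoginPage(driver).log_in("alice", "s3cret")
print(driver.log[-1])  # the last recorded action is the submit click
```

Note that `log_in` returns the next page object, a common convention that lets tests chain services without ever knowing how any page is structured.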
Successfully implementing the Page Object Model will:
There are some disadvantages to the Page Object Model approach, but most involve dealing with edge cases or uncommon scenarios. The real disadvantage of this approach comes from the high degree of technical knowledge required to implement and maintain it. Page objects make no attempt to close the gap that exists between business logic and test automation. Test creation and execution are both technical tasks under this approach.
BDD combines the principles of test-driven development with ideas from domain-driven design to enable collaborative software development. This is accomplished primarily by establishing a domain-specific language (DSL) that allows teams to express the behavior and expected outcomes of application functionality through natural language.
This means that tests can express conditions, actions, and outcomes in a contextual and understandable way with the technical details of their implementation hidden in the background.
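The shape of that separation can be sketched in a few lines of Python. The scenario text, the step patterns, and the dictionary used as shared context are all illustrative assumptions, not the API of any real BDD tool such as Cucumber; they show only how natural-language steps are matched to hidden implementations:

```python
import re

# Each step definition pairs a natural-language pattern with the code
# that implements it; the scenario text never mentions that code.
STEPS = []

def step(pattern):
    def register(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return register

@step(r'the user is on the login page')
def on_login_page(ctx):
    ctx["page"] = "login"

@step(r'they log in as "(\w+)"')
def log_in_as(ctx, user):
    ctx["user"] = user

@step(r'they should see the dashboard')
def see_dashboard(ctx):
    assert ctx.get("user"), "no user logged in"
    ctx["page"] = "dashboard"

# The scenario reads as business language, Given/When/Then style.
SCENARIO = """
Given the user is on the login page
When they log in as "alice"
Then they should see the dashboard
"""

def run_scenario(text):
    ctx = {}
    for line in text.strip().splitlines():
        # Strip the leading Gherkin-style keyword before matching.
        body = line.strip().split(" ", 1)[1]
        for pattern, func in STEPS:
            match = pattern.fullmatch(body)
            if match:
                func(ctx, *match.groups())
                break
        else:
            raise ValueError(f"no step matches: {body}")
    return ctx

print(run_scenario(SCENARIO)["page"])  # -> dashboard
```

Captured groups in the pattern (here, the username) become step arguments, which is what lets one step definition serve many concrete scenarios.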
BDD does an excellent job of closing the gap between test automation and business logic, but the true power comes in how flexible it is on the back end. The most amazing part is how seamless it can be to bring these pieces together in a technology stack that makes sense in your organization.
An implementation of BDD will:
While the concept of BDD might sound attractive to nearly everyone, it necessitates a huge paradigm shift in the way we build software. The gap is smaller for organizations actively leveraging a test-driven development methodology, but for those with completely different models for developing and deploying software, the change is considerable. Similarly, BDD requires that we change our culture and mentality around creating software in a significant way.
Beyond the huge paradigm shift that some organizations would experience, BDD also requires that we introduce additional tools into our technology stack, and in some cases this may be undesirable. For more information, look into tools like Cucumber or RSpec, which strongly enable the creation of a DSL and the implementation of the logic behind it.
Properly evaluate your options and settle on the approach that best suits your organization. The first step is to properly select a tool or set of tools to enable the automation of your application functionality; then carefully gather your requirements.
Make sure that you do not fall into the trap of creating one-off test automation scripts that address only specific application requirements. Sometimes, organizations mistakenly conclude that they need to reuse existing manual test suites that were not designed with modularity and reusability in mind, or that their applications are too unique or complex for generic design approaches to work. The long term cost of creating bulky and inefficient test automation scripts will outweigh the cost of immediately designing good test automation. Strong design is your number one key to success.