Curious about automated user-interface-level (UI) testing? That’s good; curiosity is where it all begins, and you’ve come to the right place. The next step, though, can be the most daunting.
The purpose of this post is to provide some high-level strategies and encouragement to get you started on your journey.
Let’s get a couple of things on the table to avoid potential confusion. First, our central focus will be on automated UI-level tests. Some of the concepts and ideas will naturally bleed over into code-level unit tests and service-level integration tests, and we’ll discuss these aspects of testing with UI features in mind.
Second, these high-level ideas come from our own experience and will not always translate to your unique business processes, operations, and technology needs. As Kaner, Bach, and Pettichord reiterate in Lessons Learned in Software Testing,
“. . . the concept of best practices doesn’t really exist devoid of context.” (1)
Now that you know what you’re in for, we hope you’ll find conversation starters, thought-provokers, and other useful nuggets to kick-start your move into automated UI testing.
Taking the plunge: Start with expectations and create a baseline
What do you hope to get out of an automated UI effort? Who is going to be writing the tests? How frequently do you envision them running? Who is going to consume the output and reports generated by them?
The goal here isn’t to have an exhaustive plan or answer all of these important questions right away. When you’re undertaking new business practices or adopting transformative technology, it is important to have some sort of starting point or baseline against which to compare subsequent changes. Committing your thoughts and ideas to paper (physical or digital), along with notes on your business’s current testing environment, can serve as that baseline.
Which technology/tool am I supposed to use?
It’s best to approach this question with an openness to all the different ‘tech flavors’ and be unafraid to make significant changes. Your choice will directly impact initial expectations, particularly with regard to the skills necessary to author the tests and any supporting code. It’s also important at this stage to think about your SDLC as a whole: Are you considering transitioning to BDD? Is there already a solid deployment process you need to mesh with? How frequently are changes being pushed? Categorizing your options for testing technology will help answer some of these initial questions.
Categorizing technology options
SerenityBDD and Cucumber unlock the gherkin syntax for describing behaviors, but require coded hooks in order to become executable. Selenium WebDriver and Appium open the door to controlling browsers, mobile devices, and desktop apps from most modern programming languages, but require a unit testing framework in your language of choice to write the tests.
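To make the first category concrete: a Cucumber feature file describes a behavior in plain-language gherkin, but each step still needs a coded step definition (the “hook”) before it can execute. A hypothetical feature file might look like this (the feature name, page, and title are all made up for illustration):

```gherkin
Feature: Homepage navigation

  Scenario: A visitor lands on the homepage
    Given a browser is open
    When the visitor navigates to the homepage
    Then the page title should be "Example App - Home"
```

The appeal is that non-programmers can read (and often write) the scenario; the catch is that someone still has to implement the Given/When/Then steps in code.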
Record-and-playback tools, such as Katalon and TestComplete, boast “codeless solutions,” although you may end up constantly re-recording scenarios depending on the app under test and the release cycle. This is by no means an exhaustive list of everything out there. As you stumble across other tools in your research, categorize them alongside the options mentioned here.
I’ve picked a technology and someone to work with. Now what?
One of the most common mistakes we see is rushing, without adequate support, to generate an enormous number of tests in order to convert a manual regression suite completely over to an automated one. Beyond the strong reminder that automated tests are not a complete substitute for manual tests, this can lead to a casserole of difficult-to-maintain artifacts that are constantly breaking the build.
Take it slowly. Work through the questions in the sections above within the scope of just a few tests. You will thank yourself in the long run if you’ve dealt with some of the pain points at a limited scope before trying to ramp up the volume. For example, if you’re working with a web app, start with simple navigation tests, e.g., confirm that you can navigate to three different pages, including the homepage, by checking page titles. Keep these tests current while changes to the application are in progress. Focus on how and when you run these tests. Consider how you might add more. If the thought of more tests seems too painful, consider breaking the conversion process down into even smaller steps.
Useful example of taking the plunge into automated UI tests
Let’s say that I’m a test manager at a company that builds technology solutions for healthcare providers. I’ve decided that I want to start experimenting with automated UI testing for one of the six different web apps currently under my purview. After considering the makeup of the whole team responsible for that app (BAs, scrum masters, application developers, testers, etc.), I’ve settled on the enterprising individual who will give this a shot. We discuss the current development and deployment processes, and, given the background of the project in question, decide that a Java project built on the command line best suits our technology and business process needs. After working through Selenium WebDriver tutorials, our test writer comes back to us with a project containing two tests: one that confirms that upon navigating to the homepage URL, the page title is accurate, and another that confirms the page title of the login page.
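As a sketch of what those first two tests might look like, here is a hypothetical JUnit 5 class using Selenium WebDriver. The URLs and expected titles are placeholders, and it assumes the Selenium and JUnit dependencies are on the classpath and a ChromeDriver binary is available locally:

```java
// Hypothetical first smoke tests: two page-title checks.
// Assumes Selenium WebDriver 4.x, JUnit 5, and a local chromedriver.
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class SmokeTests {
    static WebDriver driver;

    @BeforeAll
    static void openBrowser() {
        driver = new ChromeDriver();
    }

    @AfterAll
    static void closeBrowser() {
        driver.quit();
    }

    @Test
    void homepageTitleIsCorrect() {
        driver.get("https://app.example.test/");          // placeholder URL
        Assertions.assertEquals("Example App - Home", driver.getTitle());
    }

    @Test
    void loginPageTitleIsCorrect() {
        driver.get("https://app.example.test/login");     // placeholder URL
        Assertions.assertEquals("Example App - Log In", driver.getTitle());
    }
}
```

Because this is an ordinary JUnit project, running it from the command line (for example, `mvn test`) fits naturally into the build-and-deploy processes discussed next.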
Over the next few weeks, we focus on running these two tests frequently, ironing out our own build process, and working with the application development processes to determine when and how our tests execute. We also work through a couple of different reporting methods while figuring out how to present, discuss, and store that data. When ready, we expand our two tests to ten and (once again) iterate on our processes and goals. We continue this cycle until we’ve got solid coverage with reliable tests and processes. Armed with our experience from the first app, we turn our attention to the next one.
Follow-up post: Automated UI Tests: Taming the Tangle
Kaner, C., Bach, J., & Pettichord, B. (2002). Lessons Learned in Software Testing: A Context-Driven Approach. New York, NY: John Wiley & Sons, Inc.