Ever tried to detangle a big box full of wires and cables to get to that one power cable? It often seems that no matter how much care we put into placing those cables into the box, we inevitably end up with a tangled mess that tests both the physical integrity of the box and our patience.
If you feel (or are starting to feel) that your automated UI testing efforts are like a box full of cables, then this post is for you.
Some initial thoughts
If you are coming from the preceding post, Automated UI Tests: Taking the Plunge, then you probably have a solid idea of what we're going to cover here. The ideas and advice below are not meant to be a definitive list of items that, if followed, guarantees a successful conversion to automated UI testing.
Part of what makes the world of software development exciting is that every situation is unique and presents its own challenges. Instead, our aim is to get your creative juices flowing and provide enough context to set you on a confident path toward automated UI testing, with your own artifacts and processes.
Let’s start with flaky tests
If you are at all familiar with testing environments, you know which flaky tests we're talking about. Sometimes they pass, sometimes they fail. There seems to be no rhyme or reason to which you'll get, and you find yourself holding your breath every time a build is triggered.
Flaky tests can be toxic to your efforts in several ways, but the most direct impact happens when the people responsible for that app begin to lose confidence in the automated tests.
Angie Jones delivered a fantastic talk at SauceCon 2017, full of strategies and conversation starters on the topic of flaky tests. The bottom line: don't allow them to fester. The first steps are to isolate them and move them to another branch so that they stop poisoning a build that otherwise provides consistently valuable information.
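As one way to put that quarantine into practice, a suite running on JUnit 5 could tag its known-flaky tests and exclude that tag from the trusted build. The test names and the "flaky" tag below are our own convention for this sketch, not anything built in:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class CheckoutTests {

    @Test
    void addsItemToCart() {
        // Stable test: runs in every build.
    }

    // Quarantined: passes and fails intermittently, under investigation.
    // "flaky" is a tag name we chose, not a JUnit built-in.
    @Tag("flaky")
    @Test
    void appliesCouponAtCheckout() {
        // Excluded from the trusted build until it is fixed.
    }
}
```

With Maven Surefire, setting `<excludedGroups>flaky</excludedGroups>` keeps the tagged tests out of the main build, while a separate job can still run them until they're fixed or retired.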
Where the wild browsers roam
It can feel like graduation day when you’ve just finished a batch of automated tests; you have finally completed a phase of the project, and it’s ready to run for the whole world to see.
Then the reality of browser/device coverage sets in, and the companion reality of run time rears its ugly head. For example, in the context of a web app, you are typically expected to execute against at least the most current versions of the Big Four (Chrome, Safari, Firefox, and IE). You quickly reach a point where spinning up virtual machines on your development box feels slow and inevitably hijacks your computer, making it difficult to keep working while a build runs.
Luckily, there are a number of solutions out there to address this problem. Cloud providers such as SauceLabs and Browserstack will work for some situations, while a hosted solution like Element34's Selenium Box will work for others. If your needs are not overly extensive, you could also look into standing up your own Selenium Grid.
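Whichever option you land on, the test code itself usually only needs to know about a remote endpoint. Here is a minimal sketch against a self-hosted Grid; the hub URL is a placeholder, and cloud providers supply their own endpoints and credentials:

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridSmokeTest {

    public static void main(String[] args) throws Exception {
        // Placeholder hub address; point this at your Grid or cloud provider.
        URL hub = new URL("http://localhost:4444/wd/hub");

        // The Grid routes the session to a node that can satisfy these options.
        WebDriver driver = new RemoteWebDriver(hub, new ChromeOptions());
        try {
            driver.get("https://example.com");
            System.out.println(driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}
```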
A word of advice: if you developed all of your tests against one browser, they may not behave as expected on the others. This could (and very well may) be its own post in the future. Keep in mind that you may need to add branching logic depending on the browser or device in question.
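The workarounds themselves vary, but the branching usually takes a simple shape: inspect the browser in play and adjust. A rough sketch follows; the Safari-specific JavaScript click is purely illustrative, not a recommendation:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.HasCapabilities;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class BrowserBranching {

    // Illustrative only: click via JavaScript on Safari, natively elsewhere.
    static void click(WebDriver driver, By locator) {
        WebElement element = driver.findElement(locator);
        String browser = ((HasCapabilities) driver).getCapabilities().getBrowserName();
        if ("safari".equalsIgnoreCase(browser)) {
            ((JavascriptExecutor) driver).executeScript("arguments[0].click();", element);
        } else {
            element.click();
        }
    }
}
```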
Additionally, the sentiments from the previous post are relevant here: if you are embarking on a new idea for multiple browser/device/host support, start with a small subset of your tests. It will be easier to work out the kinks of your brand-new Grid with a limited number of tests that finish in a few minutes than with your whole suite.
What’s in a framework?
“Framework” is one of those words in our industry that can mean radically different things to different people. We’re referring to it here as a repeatable, generic solution that helps you quickly bootstrap new projects.
No matter which technology and tool stack you are using, there are going to be ways to cook up little bits of the process that are transferable. Thoughts in this direction usually arrive around the same time you want to start adding automated UI coverage to the next app on your list. A good number of the frameworks we've built and seen hit these major feature points:
- Handle input and desired properties
  - Do you want to be able to specify things like the browser, host operating system, versions, additional app binaries, and properties from an external configuration file? What would you prefer to be command-line arguments? What about external data files? (The factory sketch after this list reads the browser name from a system property.)
- Manage the object that controls the browser, device, etc.
  - This is the WebDriver object in the Selenium WebDriver world. How do you get this object and fire up the application? Do you want to abstract it behind a factory, as in the sketch after this list? Do you need to proxy any of the behavior to start and stop the browser, emulator, or simulator?
- Establish your reporting features of choice
  - How are you currently digesting the output that your builds produce? Are the mechanisms that produce that output tightly coupled to the tests of a particular app? How would you abstract those reporting mechanisms so that they can be used for any app? If you're entertaining different options here, the listener sketch after this list shows one place to hang them.
- Include examples
  - What does a typical test flow look like? If you're using page objects, what would a simple one include? This is not only useful for newcomers to the automated testing effort, but also serves as a good, executable reminder when starting a fresh project. (A simple page object sketch appears below.)
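To make the first two feature points concrete, here is a minimal sketch of a factory that reads the browser name from a system property and hands back a matching WebDriver. The `browser` property name and the Chrome default are assumptions for the sketch; a real framework might layer in a configuration file as well:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public final class DriverFactory {

    private DriverFactory() {}

    // Browser choice comes in as -Dbrowser=...; "chrome" is an assumed default.
    // The matching driver binary still needs to be available on the machine.
    public static WebDriver create() {
        String browser = System.getProperty("browser", "chrome");
        switch (browser.toLowerCase()) {
            case "chrome":
                return new ChromeDriver();
            case "firefox":
                return new FirefoxDriver();
            default:
                throw new IllegalArgumentException("Unsupported browser: " + browser);
        }
    }
}
```

A build could then switch targets with something like `mvn test -Dbrowser=firefox`, without touching any test code.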
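For the reporting point, one way to decouple output from any single app is a reusable test listener. This JUnit 5 sketch just prints outcomes, but the same hooks could feed screenshots or a dashboard instead; the class name is our own:

```java
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestWatcher;

// Reusable across projects: register with @ExtendWith(ReportingWatcher.class).
public class ReportingWatcher implements TestWatcher {

    @Override
    public void testSuccessful(ExtensionContext context) {
        System.out.println("PASS: " + context.getDisplayName());
    }

    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        // Swap the println for whatever reporting mechanism you settle on.
        System.out.println("FAIL: " + context.getDisplayName() + " - " + cause.getMessage());
    }
}
```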
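And for the examples point, a simple page object shipped with the framework might look like the following; the login page and its locators are hypothetical stand-ins:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical login page; the locators are placeholders for illustration.
public class LoginPage {

    private final WebDriver driver;

    private final By username = By.id("username");
    private final By password = By.id("password");
    private final By submit = By.cssSelector("button[type='submit']");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void logIn(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(submit).click();
    }
}
```

A typical test flow then reads as plain intent: construct the page with a driver from the factory, call `logIn`, and assert against whatever page comes next.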
Closing remarks
Considering there isn't a silver-bullet solution to most problems in software development, detangling your own box of cables and wires doesn't come with a surefire checklist for success either. At the very least, take comfort in the fact that you are not alone, and you can use the advice above to help get the wire-wrangling process started.