Boozang tests are broken into modules, which correspond to functional areas of the application under test. This increases test re-usability, keeps your tests organized, and encourages good testing practices.
This is an object-oriented approach to testing. Just as your application is divided into modules and sub-modules, so should your tests be. This allows you to create a mapping from any application function to a corresponding test. For instance, for a project management application, you would create a “Project” module in Boozang. Inside the module, you might have the operational tests “Create Project”, “Edit Project”, and “Delete Project”, and the test suite “Create Project Loop with Cleanup”. By organizing your tests in this fashion, you can build upstream tests on top of these operational tests, and easily test business requirements by combining tests into higher-order tests.
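To make the structure concrete, here is a minimal sketch of the module-to-test mapping described above. The shape of the data and the `runSuite` helper are purely illustrative assumptions, not Boozang's actual model or API:

```javascript
// Hypothetical sketch: modules group operational tests, and a suite
// composes them into a higher-order test. Names mirror the example above.
const modules = {
  Project: {
    tests: {
      "Create Project": () => "project created",
      "Edit Project":   () => "project edited",
      "Delete Project": () => "project deleted",
    },
    suites: {
      // A higher-order suite built from the operational tests above.
      "Create Project Loop with Cleanup":
        ["Create Project", "Edit Project", "Delete Project"],
    },
  },
};

// Run a suite by executing its operational tests in order.
function runSuite(moduleName, suiteName) {
  const mod = modules[moduleName];
  return mod.suites[suiteName].map((testName) => mod.tests[testName]());
}
```

The point of the mapping is that a business requirement ("a project can be created, edited, and cleaned up") is expressed by composing existing operational tests rather than re-recording the same steps.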
Learn more about module support in our documentation: http://docs.boozang.com/#modules-and-tests
A big problem with several test-automation platforms is that tests that work on one platform might fail on another, or fail intermittently. This is often because of load times: when an element hasn’t been fully loaded into the DOM, the test-automation tool throws an Element Not Found exception or similar, and the test gives a false negative. This can be mitigated using assertions or explicit waits, but doing so is time-consuming and requires discipline from the test author.
In Boozang, we have built this into the tool. Boozang automatically waits for a pre-configured time and retries when it cannot find an element, making test execution stable regardless of the performance of the target application. This means that a test recorded on a development environment will run just as reliably on staging or production.
Moreover, it is also possible to increase delays or time-outs on a per-action basis. This allows you to handle exceptional cases, such as when a pause must be introduced to wait for a synchronization event.
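Here is a generic sketch of the wait-and-retry behavior described above. This is not Boozang's actual API; `waitForElement`, the default timings, and the `findElement` callback are illustrative assumptions:

```javascript
// Poll for an element until it appears or a deadline passes, instead of
// failing on the first lookup. Defaults stand in for the tool-wide
// pre-configured wait; per-action overrides are passed in the options.
async function waitForElement(findElement, { timeoutMs = 5000, intervalMs = 200 } = {}) {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const el = findElement();          // e.g. a DOM lookup in a real runner
    if (el) return el;                 // found: the action can proceed
    if (Date.now() >= deadline) {
      throw new Error("Element not found within timeout");
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // retry shortly
  }
}
```

A per-action override then amounts to passing a larger budget for one step, e.g. `waitForElement(lookup, { timeoutMs: 30000 })` for an action that must wait out a slow synchronization event.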
The main difference between Boozang and other test tools is the introduction of a new type of element selector. The selectors are based on natural language and only use element attributes, such as “id” or “class”, as a fallback. This means that Boozang records the test just as a human would describe it. A test in Boozang where the button “Create Project” is clicked simply records as “Click Create Project”. The advantage of this is that tests remain stable against changes that are “invisible” to the human eye, creating an automation approach that mimics manual testing.
The advantages of this go beyond superior execution stability. Moving away from XPath and CSS selectors allows us to classify functions at a deeper level, which lets us generate a number of test scenarios without human interaction. We have also noticed a number of other upsides, such as being able to do AI repair and to match data to forms perfectly.
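The idea of text-first selection with attribute fallback can be sketched as follows. This is a conceptual illustration only, with elements modeled as plain objects rather than real DOM nodes, and it does not reflect Boozang's internal selector logic:

```javascript
// Prefer what a human sees (the visible label); fall back to structural
// attributes like id or class only when the label lookup fails.
function selectElement(elements, { text, id, className }) {
  // 1. Natural-language match: the visible label, as a tester would say it.
  let match = elements.find((el) => el.text === text);
  if (match) return match;
  // 2. Attribute fallbacks, tried only when the label cannot be found.
  match = elements.find((el) => el.id === id);
  if (match) return match;
  return elements.find((el) => el.className === className) || null;
}
```

Because the label is the primary key, refactorings that reshuffle the DOM or rename classes leave the selector intact, which is what keeps such tests stable against changes invisible to the human eye.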
Even though natural-language tests and automated waits make test runs stable, there might still be breaking code changes that produce a false negative, for example when a button label has changed or something has moved significantly in the DOM tree. In these cases, it’s important to be able to quickly repair any broken tests.
Boozang solves this problem by introducing a Repair mode. Repair mode plays the test just like normal playback, but instead of throwing an error when an element isn’t found, it shows a dialog that allows you to re-select the element. This allows a user to quickly update all automated tests after a big application update, or even a complete change of user-interface framework. For instance, when a customer migrates an application from legacy Java to a single-page application in React or Angular, this function can be used to repair all the breaking changes the migration introduced. This means almost complete re-use of all tests, provided that business requirements stay the same between the two paradigms.
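Conceptually, a repair-style playback loop catches the element-not-found case and asks for a replacement instead of failing. The sketch below is an assumption about the general pattern, with an `onRepair` callback standing in for the interactive re-select dialog; none of these names are Boozang's:

```javascript
// Play back steps; when a selector no longer matches, invoke a repair
// callback (the stand-in for the re-select dialog) and record the fix.
function playWithRepair(steps, findElement, onRepair) {
  const repaired = [];
  for (const step of steps) {
    let el = findElement(step.selector);
    if (!el) {
      const newSelector = onRepair(step);   // interactive in a real tool
      el = findElement(newSelector);
      if (!el) throw new Error(`Could not repair step: ${step.name}`);
      repaired.push({ ...step, selector: newSelector }); // persist the fix
    } else {
      repaired.push(step);
    }
  }
  return repaired; // the updated, runnable test
}
```

The key design point is that the repaired selectors are written back into the test, so one interactive pass after a big UI change yields a suite that runs unattended again.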