Friday, March 27, 2020

New integration test framework in Collabora Online.


At Collabora, we invest a lot of hard work into making LibreOffice's features available in an online environment. Recently we greatly improved the Collabora Online mobile UI, so it's smoother to use from a mobile device. While putting more and more work into the software and trying to support more and more platforms, we also need to spend time improving the test frameworks we use for automated testing. These test frameworks make sure that while we enrich the software with new features, it remains stable during the continuous development process.

End-to-end testing in the browser

One step on this road was the integration of the Cypress test framework into the Collabora Online code. Cypress is an end-to-end test framework that runs in the browser, so any online application can be tested with it. It mainly allows us to simulate user interaction with the UI and check the results of events via the website's DOM elements. That lets us simulate user stories and catch real-life issues in the software, so our quality measurement matches the actual user experience.

When I investigated the different alternatives for browser testing, I also checked the Selenium test framework. I didn't spend more than two days on it, but my impression was that Selenium is somewhat "old" software: it tries to support many configurations and many language bindings, which makes it hard to use and keeps it stuck in its current state. Cypress, on the other hand, is a newer test framework and seems more focused. It was easier to integrate into our build system and is easier to use, which is a big plus, because it's not enough to integrate a test framework; developers need to learn how to use it too. The one advantage I saw in Selenium is its better browser support. It supports all the main browsers (Chrome, Mozilla Firefox, Safari, Internet Explorer), while Cypress mainly supports only Chrome, although it is improving in this area: it now has beta Mozilla Firefox support. So in the end I decided to use Cypress, and I'm happy I did, because it works nicely.

Cypress in Collabora Online

So Cypress is now integrated into the Collabora Online code, and we already have around 150 tests, mainly for the mobile UI. As we submit most of our work upstream, these tests are also available in the community version of the software. Cypress is integrated into the software's GNU make based build system, so running make check will run these tests automatically. This is also part of the continuous integration system, so we can catch any regression instantly, before it actually hits the code. All developers of the online code are encouraged to get familiar with the test framework, so it will be easier for them to understand whether a test failure indicates an issue in their proposed patch. There is a set of useful notes in the source code, in the readme file: [source_dir]/cypress_test/README. In the following paragraphs, I try to add some advice on how to investigate a cypress test that fails on your patch.

How to check a test failure?

Interactive test runner

When you run make check, the cypress tests run in headless mode, so you can't see what happens on the UI while the tests are running. If you see a test failure, the easiest way to understand what happens is to check it in the interactive test runner. To do that, you can call make run-mobile or make run-desktop, depending on what kind of test you are interested in. In interactive mode, you'll get a window where you can select a test suite (a *_spec.js file) and run only that test suite.

After you select a test suite, you'll see the tests running in the browser. It's fast, so you probably can't follow all the steps, but after the tests are finished you can select each step and check its screenshot, so you can follow the state of the application. This way you can see how the application gets into the failure state.

Can't reproduce a failure in the interactive test runner

Sometimes it happens that a test failure is reproducible only in headless mode. In this case there are several things you can do. First, you can check the screenshot taken at the point where the test failed. This screenshot is automatically saved into a separate screenshots folder.


This screenshot shows only the failure state, which might not be enough. You can also use the cypress command log to write important information to the console during a test run. You can do that with the cy.log() method, called from the JS test code (note that this is not equivalent to the console.log() method). In the case of a test failure, these logs are dumped to the console output and written to a log file as well.


A third option is to enable video recording. With video recording enabled, the cypress test framework generates a video of the test run, showing the same thing you would see in the interactive test runner. To enable video recording, remove the "video" : false, line from the [source_dir]/cypress_test/cypress.json file. After that, running make check will record a video for every test suite you run and put the recordings under a videos folder.
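For reference, the relevant fragment of the cypress.json configuration (other settings omitted) looks like this; deleting the "video" line restores Cypress's default behavior of recording videos:

```json
{
    "video": false
}
```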


How to run only one test?

To run a single test suite, you can use the spec option:

make check-mobile spec=writer/apply_font_spec.js

The spec option can be used with the check-mobile and check-desktop rules, depending on what kind of test you intend to run. This runs headless, but in the interactive test runner you can do the same by selecting one test suite from the list. With these options you can run one test suite, but a test suite contains several tests. If you would like to run only one test, you need to combine a test suite run with the only() method. Open the test suite file and add only() to the definition of the specific test case:

- it('Apply font name.', function() {
+ it.only('Apply font name.', function() {

With that, both the headless run and the interactive test runner will ignore any other tests in the same test suite. This is useful when somebody is investigating why a specific test fails.


So that's it for now. I hope this information is useful for getting familiar with the new test framework. Fortunately, the test framework provides us with nice tools for writing new tests and checking test failures. I'm still working on the test framework to customize it for our online environment. Hopefully, using this test framework will improve software quality in the long term.
