
Monitoring long-running tests by intermediary logging of results


Motivation

The set of tests run against an application tends to grow as more features are added and more bugs are fixed. This growth comes at the cost of execution time: a test suite that once took 30 minutes may now take 2 hours. Since log messages and test results normally only become available once a test case has finished executing, the results of long-running tests can take a long time to appear. This can impact the workflow, especially when one only wants to check the status of a particular set of tests.

In this article we propose an example of how Squish GUI Tester functions and scripting language features can be used to log additional information that is visible while a test suite is still executing.

Example

# -*- coding: utf-8 -*-

import codecs
import datetime


# The status file is written into the current test case's directory
status_file_name = squishinfo.testCase + "/status.txt"


# Detach (i.e. potentially terminate) all AUTs at the end of a scenario
@OnScenarioEnd
def hook(context):
    # It is a bit wasteful to open and close the same log
    # file repeatedly here, could be improved:
    add_status_message("END SCENARIO: %s" % context.title)
    add_status_message("SCENARIO RESULT COUNTS:")
    for result_type in ["errors", "fails", "fatals", "passes", "testcases", "tests", "warnings", "xfails", "xpasses"]:
        add_status_message("  %s: %s" % (result_type, test.resultCount(result_type)))
    if context.userData["OnScenarioStart_fail_counter_total"] != (test.resultCount("errors") + test.resultCount("fails") + test.resultCount("fatals")):
        add_status_message("SCENARIO STATUS: FAIL or FATAL or ERROR")
    else:
        add_status_message("SCENARIO STATUS: SUCCESS")
    add_status_message("")
    add_status_message("")
    add_status_message("")

    for ctx in applicationContextList():
        ctx.detach()


@OnScenarioStart
def hook(context):
    add_status_message("START SCENARIO: %s" % context.title)
    context.userData = {}
    context.userData["OnScenarioStart_fail_counter_total"] = test.resultCount("errors") + test.resultCount("fails") + test.resultCount("fatals")

def add_status_message(message):
    global status_file_name
    timestamp = datetime.datetime.strftime(datetime.datetime.now(), "%Y-%m-%d %H:%M:%S.%f")[:-3]
    append_to_file_utf8(status_file_name, "%s %s" % (timestamp, message))


def append_to_file_utf8(file_name, data):
    f = codecs.open(file_name, "a", "utf8")
    f.write(data)
    f.write("\n")
    f.close()

This example logs the scenario status and the total result counts directly to a file each time a scenario finishes. To do this, we use the OnScenarioStart and OnScenarioEnd hooks along with Squish's test.resultCount() function.

Running a dummy BDD test case leads to the following file content:

2018-05-04 12:49:33.418 START SCENARIO: scenario1
2018-05-04 12:49:33.420 END SCENARIO: scenario1
2018-05-04 12:49:33.420 SCENARIO RESULT COUNTS:
2018-05-04 12:49:33.420   errors: 0
2018-05-04 12:49:33.420   fails: 0
2018-05-04 12:49:33.420   fatals: 0
2018-05-04 12:49:33.420   passes: 1
2018-05-04 12:49:33.420   testcases: 0
2018-05-04 12:49:33.420   tests: 1
2018-05-04 12:49:33.420   warnings: 0
2018-05-04 12:49:33.420   xfails: 0
2018-05-04 12:49:33.420   xpasses: 0
2018-05-04 12:49:33.420 SCENARIO STATUS: SUCCESS
2018-05-04 12:49:33.420 
2018-05-04 12:49:33.420 
2018-05-04 12:49:33.420 
2018-05-04 12:49:33.420 START SCENARIO: scenario2
2018-05-04 12:49:33.422 END SCENARIO: scenario2
2018-05-04 12:49:33.422 SCENARIO RESULT COUNTS:
2018-05-04 12:49:33.422   errors: 0
2018-05-04 12:49:33.422   fails: 1
2018-05-04 12:49:33.422   fatals: 0
2018-05-04 12:49:33.422   passes: 2
2018-05-04 12:49:33.422   testcases: 0
2018-05-04 12:49:33.422   tests: 3
2018-05-04 12:49:33.422   warnings: 0
2018-05-04 12:49:33.422   xfails: 0
2018-05-04 12:49:33.423   xpasses: 0
2018-05-04 12:49:33.423 SCENARIO STATUS: FAIL or FATAL or ERROR
2018-05-04 12:49:33.423 
2018-05-04 12:49:33.423 
2018-05-04 12:49:33.423 
2018-05-04 12:49:33.423 START SCENARIO: scenario3
2018-05-04 12:49:33.424 END SCENARIO: scenario3
2018-05-04 12:49:33.425 SCENARIO RESULT COUNTS:
2018-05-04 12:49:33.425   errors: 0
2018-05-04 12:49:33.425   fails: 2
2018-05-04 12:49:33.425   fatals: 0
2018-05-04 12:49:33.425   passes: 3
2018-05-04 12:49:33.425   testcases: 0
2018-05-04 12:49:33.425   tests: 5
2018-05-04 12:49:33.425   warnings: 0
2018-05-04 12:49:33.425   xfails: 0
2018-05-04 12:49:33.425   xpasses: 0
2018-05-04 12:49:33.425 SCENARIO STATUS: FAIL or FATAL or ERROR
2018-05-04 12:49:33.425 
2018-05-04 12:49:33.425 
2018-05-04 12:49:33.425
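
Because the status file is appended to while the suite is still running, it can be watched with any file-following tool. As a simple illustration, here is a minimal, hypothetical Python script (run outside of Squish, not part of the test suite) that polls the status file and prints new lines as they appear; the path used below is an assumption and would need to be adapted to the actual test case directory:

# -*- coding: utf-8 -*-

import codecs
import time

# Hypothetical path; adapt to the actual test case directory.
status_file_name = "suite_example/tst_case1/status.txt"


def follow_status_file(file_name, poll_interval=1.0):
    # Print lines appended to the file as they arrive, similar to 'tail -f'.
    with codecs.open(file_name, "r", "utf8") as f:
        while True:
            line = f.readline()
            if line:
                print(line.rstrip("\n"))
            else:
                time.sleep(poll_interval)


if __name__ == "__main__":
    follow_status_file(status_file_name)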

What’s next

We have presented a possible implementation for BDD test cases and suites, but the same could be done for classic test scripts using the init() and cleanup() functions; a sketch of this follows below. Likewise, we used Python here, but the other Squish scripting languages allow for the same kind of logging. It would also be possible to log additional information about the test execution that should not appear in the report generated by Squish, for example debugging output.
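
The following is a minimal sketch of how such a classic (non-BDD) test script variant might look, reusing the helper functions from the example above. It assumes the usual Squish behavior that init() runs before main() and cleanup() runs after it; the messages and structure are illustrative only:

# -*- coding: utf-8 -*-

import codecs
import datetime


status_file_name = squishinfo.testCase + "/status.txt"
fail_counter_at_start = 0


def init():
    global fail_counter_at_start
    add_status_message("START TEST CASE: %s" % squishinfo.testCase)
    fail_counter_at_start = (test.resultCount("errors")
                             + test.resultCount("fails")
                             + test.resultCount("fatals"))


def cleanup():
    add_status_message("END TEST CASE: %s" % squishinfo.testCase)
    failures_now = (test.resultCount("errors")
                    + test.resultCount("fails")
                    + test.resultCount("fatals"))
    if failures_now != fail_counter_at_start:
        add_status_message("TEST CASE STATUS: FAIL or FATAL or ERROR")
    else:
        add_status_message("TEST CASE STATUS: SUCCESS")


def add_status_message(message):
    timestamp = datetime.datetime.strftime(datetime.datetime.now(), "%Y-%m-%d %H:%M:%S.%f")[:-3]
    append_to_file_utf8(status_file_name, "%s %s" % (timestamp, message))


def append_to_file_utf8(file_name, data):
    f = codecs.open(file_name, "a", "utf8")
    f.write(data)
    f.write("\n")
    f.close()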
