
Webinar Q&A: How to Get Faster, More Reliable Automated Testing Results Using Continuous Integration


With our friends from QASource, we recently hosted a webinar, “How to Get Faster, More Reliable Automated Testing Results Using Continuous Integration,” where we showed you how to integrate your Squish tests with the Jenkins Continuous Integration server for automatic execution of your GUI tests in a CI pipeline.

We received lots of great questions during the live webinar Q&A, but did not get to answer all of them. The following list comprises the most frequently asked questions and their answers.

Q: Which type of job is preferred for real-time implementation – freestyle or pipeline jobs?

We recommend pipeline jobs over freestyle jobs, as they offer greater flexibility and stability in complex workflows. More information on the advantages of pipeline workflows can be found here.

Q: Is the initiation of tests after a code check-in possible only with using Git, or can other versioning tools be used?

Yes, other Version Control Systems can be used, for example, Mercurial.

Q: Can we distribute test executions across various nodes through Jenkins?

Absolutely. One can make use of the ‘parallel’ function in the pipeline script to distribute tests across available nodes. The following pseudo-code runs two suites, “suite_foo” and “suite_bar”, in parallel:

parallel(
     'suite_foo': {
         node {
             squish([testsuite: 'suite_foo'])
         }
     },
     'suite_bar': {
         node {
             squish([testsuite: 'suite_bar'])
         }
     }
)

Q: Is Travis also supported? Do you provide a plugin for Squish integration?

Currently, froglogic does not offer a plugin for integration with the Travis CI system. However, running tests in batch can be achieved through command-line calls to squishrunner. We plan to offer a Travis plugin in the future.
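For orientation, such a batch call boils down to a single command (the suite path is a placeholder, and a squishserver must be reachable for the run):

squishrunner --testsuite /path/to/suite_mytests

The report output can then be collected by the CI job like any other command-line step.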

Q: Can we upload a maintainable file to the Squish for Jenkins plugin which specifies the tests to be run?

Not directly – however this can be accomplished with Jenkins pipeline scripts. Jenkins provides functions for reading files, splitting strings and iterating lists. The Squish plugin, when used in a pipeline, is just a function call with arguments. That means it is possible to use a Jenkins function for reading a file, converting the contents into a list and passing that as the corresponding argument for the Squish plugin to run only the given list of test cases. 

In pseudo-code, this might look like:

def testcases = readFile("testCaseList.txt").split("\n").join(" ")
def result = squish([
           testsuite: "suite_foo",
           testcase: testcases])

The same idea could be used to read the list of test suites.

Q: Due to company compliance and security reasons, we cannot use open source tools for our CI workflows. Which commercial tools do you support for setting up a CI pipeline with Squish?

Squish integrates with several CI servers besides Jenkins, including TeamCity and Bamboo, as well as build tools such as Ant and Maven. For more information on Squish’s extensive integration options, visit the corresponding features page.

Q: What is the long-term viability of tests run in Jenkins? For example, if after a year or so there are new versions of Squish and Jenkins, do I need to update the tests?

We strive towards keeping existing tests running with new versions of Squish. Older tests, even more than a year old, should continue to work as expected. 

Q: For the pipeline script, can we configure it such that tests are skipped if there is a failure?

Within the Squish plugin, you can configure your job to report a build as unstable if there are test failures. If there is a failure, the remaining tests will execute. Optionally, you can abort a build if a test failure is detected. 

Q: We have set up Squish with Jenkins, and now we are planning to execute tests on different machines and browsers for comprehensive coverage. What approach would you use to set up such an environment?

You can configure Jenkins to execute the same test suite on multiple machines in parallel, and your setup can include different browsers to achieve comprehensive coverage. All of this is possible within a properly written pipeline script.

Q: The object attributes of my application change every time I run the automation suites of my application. How can Squish handle dynamic object attributes for every test execution?

Squish can handle dynamic object attributes using easy-to-define Wildcard or Regular Expression matching. With these tools, one can edit the Object Map entry for dynamic objects to allow for inexact matches in object names. More information can be found in our documentation.
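As a rough illustration, a script-based Object Map entry using these matchers might look like this (the object types and property values are placeholders):

from objectmaphelper import Wildcard, RegularExpression

# Matches any main window whose title starts with "Address Book"
main_window = {"type": "MainWindow", "windowTitle": Wildcard("Address Book*")}
# Matches a label whose text contains a changing numeric id
session_label = {"type": "QLabel", "text": RegularExpression("Session id: [0-9]+")}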

The post Webinar Q&A: How to Get Faster, More Reliable Automated Testing Results Using Continuous Integration appeared first on froglogic.


Disable Logging of All But Some Messages


Motivation

Logging is a useful tool for testing. It can be used for different purposes throughout a test: outputting properties of a GUI object, tracking the currently executed step, debugging which particular code path is being followed and why, and much more. However, tests tend to grow in length or number to cover more cases, verify bug fixes and so on, and this can lead to test output containing so much logged data that it becomes hard to dissect. Using test.startSection()/endSection() can improve readability, as can testSettings.silentVerifications to some extent, but it would sometimes be preferable to log only certain specific messages (debug messages, for example).

What we propose in this article is an example that demonstrates how to override Squish functions, in particular test.log(), so that only some messages get logged. While the snippet is in Python, the same could be done in the other supported scripting languages.

An Example

def main():
    startApplication("addressbook")


    # Define alternative log() function:
    def test_log_only_prio1(msg, detail=None, priority=None):
        if priority is None or "PRIO1" not in priority:
            return
        if detail is None:
            test.log_original(msg)
        else:
            test.log_original(msg, detail)
    # Install alternative log() function:
    test.log_original = test.log
    test.log = test_log_only_prio1


    for i in range(10):
        test.log("Before action", priority=["PRIO1"])
        test.log("This is a debug log", priority=["DEBUG"])
        test.log("After action", priority=["PRIO1"])

The test results for this test case will not show any of the messages we marked as “DEBUG”. We could do the opposite and have a function that logs only “DEBUG” messages, and so on, based on our needs.

What’s Next

This technique could be used in conjunction with the one presented in this other article in order to have, for example, real-time logging of debug information for long-running tests. We also kept the example deliberately simple by hard-coding which log function to install, but the choice could be made dynamically, for example based on an environment variable, so that there is no need to change the test manually.
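A minimal sketch of that idea, assuming the filter function from the example above is available and using a hypothetical environment variable named SQUISH_LOG_FILTER:

import os

def install_log_filter():
    # Remember the original implementation only once
    if not hasattr(test, "log_original"):
        test.log_original = test.log
    # SQUISH_LOG_FILTER is a name chosen for this example only
    if os.environ.get("SQUISH_LOG_FILTER") == "PRIO1":
        test.log = test_log_only_prio1
    else:
        test.log = test.log_original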

The post Disable Logging of All But Some Messages appeared first on froglogic.

Finding Table Cells by Header Text in Java Applications


In another article, we demonstrated how to find table cells by header text in Qt applications. Here, we will show how to do the same thing in a Java AUT (Application Under Test).

Motivation

Finding table cells by header text leads to more readable tests. Furthermore, this is useful if the table columns are not always in a fixed order. With the AddressBook examples, it is possible to reorder the columns, in which case, test cases that refer directly to column numbers may be fragile.

Working with Swing JTable

At record time, clicking on a table header results in an object of type TableHeaderItemProxy being named in the Object Map. Instances of this class have a property called column, which always reflects the header’s current position and therefore changes if the columns are reordered. Combined with the caption used to name the header object, this makes it easy to identify columns by header text. You can see something like the image below in the Squish Spy by picking one of the table headers and inspecting its properties in the Properties view.

The Object Map entries for TableHeaderItemProxy look something like this:

surname_TableHeaderItemProxy = {"caption": "Surname", "type": "com.froglogic.squish.awt.TableHeaderItemProxy"}
phone_TableHeaderItemProxy = {"caption": "Phone", "type": "com.froglogic.squish.awt.TableHeaderItemProxy"}

Since the caption property was set to something meaningful for these objects, the caption gets picked up and used as part of the symbolic as well as the real names of each TableHeaderItemProxy. These object map entries will work even if the columns are reordered.

However, when recording interactions with the table cells, these TableHeaderItemProxy objects are not used. Instead, we see calls to mouseClick() on the values returned by waitForObjectItem(names.JTable, "x/y"), where the hard-coded x/y (row/column) values are fragile.

Given a header text, how can we obtain the column number? With the column property and the scripted Object Map, we can write a helper function that looks like this:

def columnNumber(columnText):
    columnHeaderViewName = {"caption": columnText, "type": "com.froglogic.squish.awt.TableHeaderItemProxy"}
    return waitForObject(columnHeaderViewName).column

Next, we can write a function called tableCell(), which uses this helper function and returns the desired cell, using waitForObjectItem().

def tableCell(columnName, rowNumber):
    colNum = columnNumber(columnName)
    return waitForObjectItem({"type": "javax.swing.JTable", "visible": True}, "{}/{}".format(rowNumber, colNum))

Here is an example test case that tests the functions against the Swing AddressBook example AUT.

def main():
    startApplication("AddressBookSwing.jar")
    activateItem(waitForObjectItem(names.address_Book_JMenuBar, "File"))
    activateItem(waitForObjectItem(names.file_JMenu, "Open…"))
    doubleClick(waitForObjectItem(names.open_JList, "MyAddresses.adr"), 78, 5, 0, Button.Button1)
    # some interactions where columns may be reordered...
    someTableCell = tableCell("Surname", 4)
    test.compare(someTableCell.text, "Boardus")

Working with SWT Table

In SWT, mouse move events on the TableHeader objects are not visible in the SWT event queue. This means the picker can’t pick them, and Squish can’t record interactions like drag and drop on them either. However, you can record a mouseClick() on them to add them to the Object Map, and with the use of Application Objects and tree navigation, it is always possible to find and inspect these objects in question.

The SWT Table has a columnOrder property that can be used to determine the current order of the columns, in case they were reordered. To map from column text to the current column number, we first need an array of column texts in the original order. The helper function below, getOriginalOrder(), generates that, and should be called once while the Table is showing all of its columns in the original order.

originalColumns = []
def getOriginalOrder():
    global originalColumns
    originalColumns = []
    table = waitForObject({"type": "org.eclipse.swt.widgets.Table"})
    for i in range(0, table.getColumnCount()):
        tc = table.getColumn(i)
        originalColumns.append(tc.text)

Now we can implement SWT versions of columnNumber() and tableCell() that work like their Swing counterparts:

def columnNumber(columnName):
    global originalColumns
    origidx = originalColumns.index(columnName)
    table = waitForObject({"type": "org.eclipse.swt.widgets.Table"})
    return table.getColumnOrder().at(origidx)

def tableCell(columnName, rowNumber):
    colNum = columnNumber(columnName)
    return waitForObjectItem({"type": "org.eclipse.swt.widgets.Table", "visible": True}, "{}/{}".format(rowNumber, colNum))
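A usage sketch mirroring the Swing example above (the AUT name and the expected cell text are placeholders, and we assume the returned cell exposes a text property just like its Swing counterpart):

def main():
    startApplication("AddressBookSWT")
    # Capture the original column order once, before any reordering happens
    getOriginalOrder()
    # ... interactions where columns may be reordered ...
    someTableCell = tableCell("Surname", 4)
    test.compare(someTableCell.text, "Boardus")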

Conclusion

In the situation where columns are not in a fixed order (or even if they are fixed), using “named columns” can improve readability and stability in your test cases. This article gives you some ideas on how to achieve this in tests against your Java AUT.

The post Finding Table Cells by Header Text in Java Applications appeared first on froglogic.

Screenshots in Squish Reports: Simplifying Result Analysis


The Squish GUI Tester excels at verifying an application’s user interface. But comprehensive verifications can come at a cost: the resulting test reports become huge and daunting to analyze. Take advantage of additional screenshots in Squish reports to get a better understanding of what happened.

A picture is worth a thousand words

Squish is a professional tool for creating, running and maintaining GUI tests. The resulting test reports can be stored in a wide range of formats or post-processed by other tools. However, sometimes it can be hard to tell why a verification failed. A test makes it very clear that, for example, a button was disabled even though it should have been enabled. Why that is the case is often not clear at all.

Watching Squish as it replays tests to determine what’s happening is often not viable:

  1. Tests execute steps at high speed. It can be hard to follow the sequence of steps visually.
  2. Tests are often executed outside of working hours. Nightly test runs are very common.
  3. Last but not least, tests typically take a couple of minutes, minimum. Watching tests can be boring, and there may be better things to do!

Screenshots in Squish reports can help with this. By storing screenshots along with the test report data you get to see the state of the screen as it was at various moments during test execution. For example, the screen as it was when a verification failed.

Adding Screenshots to Test Reports

The simplest way to log a screenshot is by invoking the test.attachDesktopScreenshot function:

def main():
    startApplication("SquishAddressBook")
    test.attachDesktopScreenshot("Desktop after launching AUT")

The documentation explains:

This function will create a screenshot of the desktop from the system where the currently active AUT is running and store that in the test report directory. In addition, an entry is being logged in the test report indicating the path of the screenshot file as well as including the specified message.

This means that you can take screenshots at arbitrary points during the execution of a test case. This is especially useful if the application under test has some visible side effect on the desktop, such as

  • An external PDF viewer is opened to display an invoice
  • A new icon appears in the system tray
  • A message box (possibly an error) is shown by the operating system

Automatically Logging Screenshots

In addition to the test.attachDesktopScreenshot function, Squish also features three APIs to create screenshots for you automatically in different situations:

  1. testSettings.logScreenshotOnFail for logging a screenshot every time a verification fails. Imagine a sporadic test failure which cannot be explained. Maybe the overall state of the application under test is broken due to external factors? Additional information can be provided to the developers via visual inspection of the desktop. This can be extremely useful for diagnosing test failures. In many cases, a quick look at a screenshot can give a useful hint as to what caused a verification to fail.
  2. testSettings.logScreenshotOnError for logging a screenshot every time a script error occurs. This is invaluable for diagnosing inexplicable test errors. For example, the test execution may abort because the application under test vanished. A screenshot may show that the application restarted itself due to an update. Or clicking a button may fail – the screenshot might show that the button is obscured by a Windows message box asking to reboot the machine.
  3. testSettings.logScreenshotOnPass for logging a screenshot every time a verification passes. This can be useful for creating a ‘photo story’ of the test execution. Verifications are typically spread all over a test script – and most of them are passes. By logging screenshots on passes, the resulting test reports become larger but much more expressive.

Especially the first two settings, testSettings.logScreenshotOnFail and testSettings.logScreenshotOnError, are extremely useful for diagnosing test failures. Enabling these settings is typically a major improvement to the generated test reports.

These three are not functions like test.attachDesktopScreenshot(), however. Instead, they are properties assuming the values ‘true’ and ‘false’. By default, they are all ‘false’ but can be enabled in a test script using a script statement such as

testSettings.logScreenshotOnFail = True

Make sure to execute this script statement early on in your test script. That way, they are created automatically for all subsequent verifications.
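For example, enabling failure and error screenshots right at the start of a test case could look like this:

def main():
    # Enable automatic screenshots before any verifications run
    testSettings.logScreenshotOnFail = True
    testSettings.logScreenshotOnError = True
    startApplication("SquishAddressBook")
    # ... test steps and verifications ...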

Accessing Screenshots

All screenshots are stored on disk in the compact PNG format, along with the other test report data. There are two main ways to view the screenshots: from the Squish IDE, or by inspecting (or processing) the test report files.

Accessing Screenshots in the Squish IDE

Accessing screenshots in the Squish IDE is useful if you just finished executing a test. It’s also handy when loading a previously generated test report into the Squish IDE.

In the Squish IDE, screenshots generated by test.attachDesktopScreenshot() show up like this:

Results generated by test.attachDesktopScreenshot()

Double-click the line saying ‘Attachment’ (the last line in the above image) to open the generated screenshot.

When using any of the testSettings flags described above, the results look slightly different:

Results generated by 'testSettings.logScreenshotOnFail = True'

In this case, double-click the line starting with ‘Desktop Screenshot’ to open the screenshot.

Accessing Screenshots In Squish Reports

Squish supports generating test reports in various formats. See the documentation of the squishrunner for a full list of supported formats. The appearance of screenshots in Squish reports depends on the format used by the report.

In HTML reports, screenshots are easily accessible in a web browser. To avoid the report becoming too large, screenshots are only shown when requested. To do so, click the little icon next to the ‘Comparison’ message in the report:

Squish HTML Report with screenshot shown

Other report formats which are meant for post-processing by other tools reference screenshots via the path on the file system. For example, Squish XML3 reports use this:

<verification>
    <location>
        <uri><![CDATA[x-testcase:/test.py]]></uri>
        <lineNo><![CDATA[5]]></lineNo>
    </location>
    <scriptedVerificationResult type="FAIL" time="2019-09-02T14:23:29+02:00">
        <scriptedLocation>
            <uri><![CDATA[x-testcase:/test.py]]></uri>
            <lineNo><![CDATA[5]]></lineNo>
        </scriptedLocation>
        <text><![CDATA[Comparison (Screenshot in "/tmp/reportsml3/suite_screenshots/tst_case1/failedImages/failed_1.png")]]></text>
        <detail><![CDATA['Address Book - Untitled' and 'Apple' are not equal]]></detail>
        <screenshot>
            <uri><![CDATA[x-results:/suite_screenshots/tst_case1/failedImages/failed_1.png]]></uri>
        </screenshot>
    </scriptedVerificationResult>
</verification>

That way, screenshots generated as part of Squish test executions are always easily available.

The post Screenshots in Squish Reports: Simplifying Result Analysis appeared first on froglogic.

GUI Testing Through Optical Character Recognition in the Cloud


Robust and reliable GUI tests are essential for the continuous deployment of today’s software projects. The Squish GUI Tester supports you in achieving this goal through dedicated Squish editions, which are aware of various GUI toolkits and technologies. Image-based Testing is included as a complementary feature for dynamically rendered visualizations. Since Squish 6.5, these test automation approaches are joined by AI-driven Optical Character Recognition (OCR). This article focuses on the configuration of Squish needed to benefit from OCR cloud services.

Optical Character Recognition (OCR)

You might find yourself in a situation where text should be recognized but dedicated GUI object recognition does not do the job. Custom rendering or usage of an unsupported third-party toolkit could be the reason. You could fall back to the image recognition approach in these cases, but this requires you to create a search image, and identification would then depend on the exact text rendering. This can be error-prone due to anti-aliasing, scaling or font substitution.

Luckily, the Squish GUI Tester introduced Optical Character Recognition as part of its 6.5 release. This feature enables you to recognize text on screen, no matter how it is rendered.

Tesseract is the OCR engine that is currently supported by Squish for local installation. But you might want to leverage a cloud service for your enterprise needs. Amazon Rekognition is probably the most popular cloud OCR service. It is really easy to integrate with Squish as we’ll show below. Squish also supports the OCR.Space API as an alternative cloud service, as you will see afterwards.

Amazon Rekognition OCR Cloud Service

The following steps have to be done to enable Squish Optical Character Recognition through Amazon Rekognition:

  1. AWS account Setup

Set up an AWS account, which includes free image recognition for 6,000 images per month within the first 12 months.

  2. Create an IAM User and AWS access key

Authentication for the AWS cloud service access is done through an access key. This key will be generated as part of the IAM user setup with AWS administrator permissions. Make sure to download the CSV file with the access key when creating your IAM user. Forgot to do this? Simply create a new key through the IAM Console.

The Squish GUI Tester already includes the glue to connect to the cloud service. There is no need to install any SDK as suggested by the AWS documentation.

  3. Configure Squish to use Amazon Rekognition

Finally, open the Squish Preferences (Edit|Preferences) and navigate to the Squish OCR engine settings. Switch the OCR Engine to Amazon Rekognition and enter the key ID (API key ID) and access key (API key) as well as the preferred Region of the Amazon datacenter. Once you have applied these changes, Squish will use Amazon‘s cloud service to identify any text within your test without any further configuration. You can easily switch to a different OCR engine at a later point without changing the test itself.

Squish OCR engine settings

OCR.Space Cloud Service

OCR.Space is another Optical Character Recognition cloud service offering. It includes 25 000 calls per month for free and there are commercial plans which include even more calls. If you prefer a self-hosted solution, don’t miss their OCR.Space Local on-premise server option.

The configuration is straightforward:

  1. Request an OCR.Space API key

Request an OCR.Space API key in a few steps by creating an account.

  2. Configure Squish to use the OCR.Space cloud service

As for Amazon Rekognition, simply open the Squish Preferences (Edit|Preferences) and navigate to the Squish OCR settings to enter the API key you received by email. Make sure to switch the OCR Engine setting to OCR.Space.

Looking for a different OCR Engine?

The OCR support in Squish has been designed to support various OCR engines. Tesseract, Amazon Rekognition and OCR.Space are the first supported engines. If you are missing any specific engine, don’t hesitate to let us know. We would be pleased to look into integrating your preferred engine with Squish.

Automate Tests Based on OCR and Image Recognition

Our documentation covers the topic of test automation with OCR and Image Recognition in detail. And don’t miss the recordings of our release webinars on OCR and Image Recognition for a quick feature demo.
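To give an idea of what OCR-driven steps look like in a test script, here is a minimal Python sketch; the AUT name and text are placeholders, and the function name follows the OCR API introduced with Squish 6.5 (see the documentation for the exact signatures):

def main():
    startApplication("myapp")
    # Click on a piece of text located via the configured OCR engine,
    # no matter how that text is rendered
    mouseClick(waitForOcrText("Login"))
    test.log("Clicked the OCR-located 'Login' text")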

The post GUI Testing Through Optical Character Recognition in the Cloud appeared first on froglogic.

Detecting Problematic Object Names Automatically


Not all object names are equal. Some are “less good” than others, for various reasons. To avoid building further tests on problematic object names, it can be useful to make test script developers aware of them automatically, as early as possible.

In our Knowledge Base article Example – Custom Object Map Sanity Checks we demonstrate how this problem could be solved. Below you will find some additional information for the solution presented in the Knowledge Base article.

Squish Object Names

To understand how such automatic checks can be implemented in Squish, some background information on object names in Squish is required:

Squish identifies GUI objects (that it should interact with) by object names. These object names are stored in the so-called Object Map.

Since Squish 6.3, the Object Map is a script file (in the same scripting language as the test suite it belongs to), and the object names are variables defined in this script file.

The values of these object name variables are dictionaries in which the keys are the names of the properties of the desired GUI object and the values are the expected values of those properties. This Python example from a simple object name file demonstrates this:

address_Book_MainWindow = {"type": "MainWindow", "unnamed": 1, "visible": 1, "windowTitle": "Address Book"}

Here the (only) object name is “address_Book_MainWindow”, and it identifies an object of type “MainWindow” which, among others, has a property “windowTitle” with the value “Address Book”.

Reporting Possible Problems

To implement such checks, we need to iterate over the object names, get their values and then perform the desired checks on them. Once that is done, it is time to consider how the problems should be reported to the user.
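A much-reduced sketch of such an iteration, here checking for the “occurrence” property discussed below (the Knowledge Base example is more complete):

import names

def check_object_names():
    for name, entry in vars(names).items():
        # Only the dictionaries are object names; skip imports and helpers
        if not isinstance(entry, dict):
            continue
        if "occurrence" in entry:
            test.warning("Object name '%s' relies on the 'occurrence' property" % name)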

Here is an example that shows how reporting the problems could look in the Test Results view, directly after executing the test case/suite:

Problematic object names reported as script runtime errors

The first section contains details about the object names that unexpectedly contain the “occurrence” property:

Details about object names that unexpectedly contain the “occurrence” property

The second section contains details about unexpected top-level object names:

Details about unexpected top level object names

To Force or Not to Force

When using such checks, one must consider whether to simply report the problems upon test case/suite execution or to abort script execution altogether.

For existing test suites it may be difficult to fix all object name problems in one step. In such cases opting for merely reporting the problem could make sense. And yet, the warnings in the Test Results would make it difficult to ignore the problems in the long run.

Example Implementation

An example implementation of the above mentioned two types of checks can be found in the article Example – Custom Object Map Sanity Checks in our Knowledge Base (kb.froglogic.com).

The post Detecting Problematic Object Names Automatically appeared first on froglogic.

Control of Sub-processes During GUI Testing


The GUI software being tested can often start sub-processes. These can be either background services which are not directly visible to the user, or they can show an additional UI. The Squish GUI Tester allows the user to control which of the sub-processes will be hooked and which are to be ignored.

Squish for Qt

When a Qt Application Under Test (AUT) is started using the startApplication() script API, Squish for Qt hooks only a single process by default. If the tested application uses a sub-process showing a GUI, you can enable the relevant option on the test suite settings page in the Squish IDE. This will make the additional GUI accessible to Squish.

With the above option enabled on Linux or macOS systems, Squish for Qt attempts to hook every sub-process spawned by the AUT. In case the sub-application does not show a GUI or uses an unsupported toolkit, loading the Squish hook into it is superfluous and may cause some unwanted side effects. In most cases it will be as benign as slightly increased memory consumption, but sometimes may include performance problems, unusual application behavior or even application crashes. Applications that are most exposed to such issues are:

  • Applications that use a different version of the Qt library than the main AUT;
  • Terminal applications that use only the core part of the Qt library and don’t initialize the GUI-related modules.

Regardless of the side effects, it is best to limit Squish for Qt hooking to affect only the required applications. Squish for Qt already blacklists a number of common UNIX utilities like bash, gzip, tar or hostname; it ignores such applications entirely. If your AUT starts any sub-processes that are erroneously hooked, you can add their executable names to the etc/ignoredauts.txt file in the Squish for Qt installation. Each entry should be the plain executable filename without its filesystem path. Such processes will then be ignored by Squish as well.

# example ignoredauts.txt file
ping
perl
my_aut_loader

Ignoring certain applications may be useful even when the sub-process hooking is disabled. In such configurations, Squish for Qt waits for a single connection from a hooked AUT. Once that happens, it ceases to hook further processes. However, some software requires a startup script or an additional application that runs prior to the main process and is responsible for starting it. Such an application may even include a simple GUI, e.g., a splash screen. To avoid mistaking such processes for the main AUT, the name of the script interpreter or the startup executable should be added to etc/ignoredauts.txt as well.

Squish for Qt on Windows

Due to limitations of the Windows operating system, automatic hooking of all the Qt sub-processes is not possible. Please follow the Squish manual to hook additional Windows applications using the dllpreload executable. Applying the treatment described in the manual selectively allows a large degree of control over the hooked processes.

Squish for Windows

Squish for Windows always hooks all the sub-processes of the AUT. Due to the non-invasive nature of the application wrapper, it should not cause any unwanted interference. However, processes that are irrelevant to the test may still clutter the object tree and litter the recorded test cases with unnecessary test instructions. Such processes can be ignored by adding their executable names to the etc/winwrapper.ini file under the Blacklisted Processes option.

[...]
# The Blacklisted Processes key is used to identify executables
# that the Windows wrapper should not attempt to hook up (for
# recording or playback). The matching is done on the filename of
# the executable (but case-insensitively).
#
# Example:
#
#   Blacklisted Processes="notepad.exe","mspaint.exe"
Blacklisted Processes="perl.exe","my_aut_loader.exe"

Conclusion

Precise control over the Squish hook can help avoid many problems. It keeps the test suites clean of unexpected and unwanted object names and test instructions. It makes navigating the Application Objects Tree easier. And in rare cases it prevents Squish from interfering with the tested software. Test writers should always strive to limit the scope of the Squish hook to the required minimum.

The post Control of Sub-processes During GUI Testing appeared first on froglogic.

External Resources for BDD Through Data Tables

$
0
0

You can use so-called data tables to execute a specific Behavior-Driven Development (BDD) GUI test scenario in a data-driven way. Create tables to define test data and use this data to drive your test. Since Squish 6.5.0, it is possible to refer to an external file for such data.

Data tables as shown below work with all Squish editions and applications. In this blog, we’re going to use a Squish for Windows package, the WPF addressbook example application and JavaScript as the scripting language to demonstrate this new feature.

We’ve created a short test script which performs an action for each row of data. In this case, we add a person to the addressbook with the given data from the .tsv file. Note: this could also be a .csv, .txt or .xls file. For more details, please take a look at our documentation on Behavior-Driven Testing.

The step implementation looks like this:

import * as names from 'names.js';

Given("AddressbookWPF is up and running", function(context) {
    startApplication("AddressbookWPF")
});

When("I create a new addressbook", function(context) {
    mouseClick(waitForObject(names.fileMenuItem));
    mouseClick(waitForObject(names.fileNewMenuItem));
});

Then("'|integer|' entries are present", function(context, rowNumber) {
    test.compare(waitForObject(names.addressBookUnnamedTable).rowCount, 
        rowNumber);
});

When("I add persons from a file", function(context) {
    var table = context.table;
    for (var i = 1; i < table.length; ++i) {
        var forename = table[i][0];
        var surname = table[i][1];
        var email = table[i][2];
        var phone = table[i][3];

        clickButton(waitForObject(names.addButton));
        type(waitForObject(names.addressBookAddForenameEdit), forename)
        type(waitForObject(names.addressBookAddSurnameEdit), surname)
        type(waitForObject(names.addressBookAddEmailEdit), email)
        type(waitForObject(names.addressBookAddPhoneEdit), phone)
        type(waitForObject(names.addressBookAddPhoneEdit), "<Return>");
        test.log("Added '" + forename + " " + surname + " " +  
            email + " " + phone + "' to the addressbook")
    }
});
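For orientation, a scenario exercising these steps with a conventional inline data table might look roughly like this; since Squish 6.5, the table rows can instead be loaded from the external file (see the Behavior-Driven Testing documentation for the exact notation):

Feature: Filling the addressbook

    Scenario: Add persons from a data table
        Given AddressbookWPF is up and running
        When I create a new addressbook
        Then '0' entries are present
        When I add persons from a file
            | Forename | Surname | Email            | Phone |
            | Jane     | Doe     | jane.doe@foo.com | 555-1 |
            | John     | Doe     | john.doe@foo.com | 555-2 |
        Then '2' entries are present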

The sample data file is stored directly in the Test Data folder of the test suite:

Once the test is executed, the following output will be visible in the test results:

Driving BDD tests with external data tables is a useful method of GUI testing, especially in cases where the input data keeps growing. Readability of your test cases is also improved, an important factor when tests are shared amongst project stakeholders.

You can download the used test suite below:

The post External Resources for BDD Through Data Tables appeared first on froglogic.


Automating Accessibility Testing: How to Check for Sane Tab Order


Ensuring sane tab ordering is part of motor and dexterity impairment accessibility, as it makes sure that alternate UI navigation via the keyboard is not only possible, but also logically comprehensible. The following example shows how you can utilize Squish to automate accessibility testing.

Tab Order Sanity and Completeness

This article (part of the University of Washington’s IT Accessibility Checklist) nicely outlines the requirements for proper tab ordering. “Users […] expect to move sequentially from left to right and top to bottom through the focusable elements“, which can be translated into onscreen coordinate checks of focusable widgets.

We are going to use the Qt paymentform example included in the Squish installation as our Application Under Test (AUT). After creating a new test suite and test case, click record, navigate to the first element of the tab order you want to test, and click it. In this example, select the invoice combo box and Squish will return a test script similar to this:

import names

def main():
    startApplication("paymentform")
    mouseClick(waitForObject(names.invoice_QComboBox), ...)

We will come back to that later.

Let’s create a function for our check and name it check_taborder_sanity. It receives only an optional up_threshold parameter, which allows widgets some room to go against the top-to-bottom movement. The function walks through every widget in the tab order and compares its top-left origin with that of the previous widget. Finally, True is returned if all of the widgets are placed according to the rules stated above, and False otherwise.

def check_taborder_sanity(up_threshold = 0):
    first = QApplication.focusWidget()
    current, bbox, minimum = taborder_next(first)
    while current != first:
        if bbox.y < minimum.y - up_threshold:
            return False
        elif bbox.x < minimum.x and bbox.y - minimum.y <= 0:
            return False
        current, bbox, minimum = taborder_next(current)
    return True

At the beginning, we retrieve the currently focused element with QApplication.focusWidget() and store it in the variable first, then get the next widget in the order by calling taborder_next (explained below). This gives us the next widget current, its screen coordinates bbox and the screen coordinates of the previous widget, which act as the minimum that is tested against. By comparing the object references of current and first, we can check whether we have completed the loop through the tab order. We then compare the origins of the current and the previous widget in order to allow only the following:

  • The current widget is either below or to the right of the previous one, unless:
  • It moves both down and left at the same time (following a downward z-stroke path).

The taborder_next function takes the current widget, performs the Tab key press on it and returns the next widget, its bounding box and the bounding box of the previous widget:

def taborder_next(from_obj):
    type(from_obj, "<Tab>")
    next_obj = QApplication.focusWidget()
    return (next_obj,
            object.globalBounds(next_obj),
            object.globalBounds(from_obj))

After that, let’s overwrite the recorded main function with a call to our new tab order sanity check function:

import names

def main():
    startApplication("paymentform")
    mouseClick(waitForObject(names.invoice_QComboBox))
    mouseClick(waitForObjectItem(names.invoice_QComboBox, "AXV-12000"))
    if check_taborder_sanity():
        test.passes("Tab order coordinates are sane")
    else:
        test.fail("Tab order coordinates are not sane")

One last thing before we are finished: interacting with the initial element might block us from proceeding in the intended tab order. In this example, clicking on the invoice combo box opens the respective dropdown list. It is easy to fix by simply selecting an item from the list, but we have to be aware that this might happen.

At last! Running this test should result in a pass with a "Tab order coordinates are sane" message!

Optional

Here are some checks that could be added to the above test:

  • Is the focused object visible as such? (See Screenshot Verifications)
  • Are all of our focusable widgets part of the tab order?

Wrap up

Let’s go over the tools used to create the above test:

  • We used GUI toolkit script bindings for Qt to retrieve the focused widget with QApplication.focusWidget(),
  • We used type(..., "<Tab>") (documentation) to navigate through our tab order, and,
  • We computed screen coordinates for widgets with object.globalBounds(my_obj) (documentation).

While some accessibility requirements will still require a certain level of manual testing (e.g. photosensitive content/seizure prevention), the Squish GUI introspection capabilities give you the tools to increase your automation coverage. Remember that improving accessibility not only benefits those with disabilities, but also brings quality-of-life improvements for any user.

The post Automating Accessibility Testing: How to Check for Sane Tab Order appeared first on froglogic.

Screenshot Verification Command Line Tools


We are all aware of how to view differences between the expected screenshot and the failed image when using the IDE to execute tests. But do you know how to view the differences when executing tests from the command prompt, or how to extract an image from a Verification Point?
Squish’s vpdiff and convertvp tools come in handy in such situations, either to debug or to extract information which can be sent to the Squish support team for further investigation when the need arises.

Here is how vpdiff can be invoked from the command prompt to view the differences:

<SQUISH_DIR>/bin/vpdiff vpfile objectName [imagefile] [output [--highlights]]

The first parameter is the VP file, the second is the object in the VP file (you will find this inside the VP file as the value of the “object” parameter), and the third is the path to the failed image.

For example:

C:\Squish\bin>vpdiff "C:\Users\TestSuites\suite_Py_Qt\tst_case11\verificationPoints\VP3" ":Address Book_MainWindow" "C:\Users\TestSuites\suite_Py_Qt\tst_case11\verificationPoints\failedImages\failed_1.png"

When the above command is executed, the Screenshot Verification Point dialog will appear, and the differences of the screenshots will be marked by red borders as shown below:

If the --highlights option is provided, an image highlighting the differences will be written to the given output file.

For example:

C:\Squish\bin>vpdiff "C:\Users\TestSuites\suite_Py_Qt\tst_case11\verificationPoints\VP3" ":Address Book_MainWindow" "C:\Users\TestSuites\suite_Py_Qt\tst_case11\verificationPoints\failedImages\failed_1.png" "diff_img.png" --highlights

The --fromvp option of the convertvp tool can be used to extract an image from the VP file.

<SQUISH_DIR>/bin/convertvp --fromvp vpfile outdir [--export-masks]

For example:

C:\Squish\bin>convertvp --fromvp "C:\Users\TestSuites\suite_Py_Qt\tst_case11\verificationPoints\VP3" "C:\Users\TestSuites\suite_Py_Qt\tst_case11\verificationPoints"

The above command extracts the image in the VP3 file and saves it at the specified path with the name img_1.png.

Similarly, the --tovp option converts screenshots to VPs:

<SQUISH_DIR>/bin/convertvp --tovp vpfile image objectname

For example:

C:\Squish\bin>convertvp --tovp "C:\Users\TestSuites\suite_Py_Qt\tst_case11\verificationPoints\VP4" "C:\Users\TestSuites\suite_Py_Qt\tst_case11\verificationPoints\image_1.png" ":Address Book_MainWindow"

The above command will convert the screenshot (image_1.png) to a Verification Point file called VP4.

The post Screenshot Verification Command Line Tools appeared first on froglogic.

Obtaining Code Coverage Data for a .NET Core Application on Linux


New to Coco v4.3.3 is the ability to instrument a .NET Core application on Linux, for use in obtaining code coverage metrics for such applications. This blog will walk you through how to instrument a simple .NET Core application written in C# and built on Linux. We will then demonstrate using Squish Coco to analyze the code coverage data for the application. 

A Simple .NET Program

We’ll use Microsoft’s “Hello World!” tutorial as our application. The program prints the statement “Hello World!” and the current time to the console. Prior installation of the .NET SDK is a requirement for this tutorial, as is a Squish Coco installation in the root directory.

The program file consists of the following code: 

using System;

namespace myApp
{
      class Program
      {
           static void Main(string[] args)
           {
                Console.WriteLine("Hello World!");
                Console.WriteLine("The current time is " + DateTime.Now);
           }
      }
}

To run the program, issue the following in a terminal:

$ dotnet run

If the program ran successfully, the console should return:

Hello World!
The current time is 12/3/2019 4:16:25 PM

Instrumenting the Application

There are two methods of instrumentation: one most suitable for engineers who typically do not have access to the project files and source (e.g., a test or QA engineer), and a second most suitable for those who develop the code. We will explore both.

Method 1: No Access to Source Files

First, clean the project directory:

$ dotnet clean

Set an environment variable to activate the code coverage:

$ export COVERAGESCANNER_ARGS=--cs-on

Then, rebuild the project:

$ dotnet build

Finally, run the project:

$ dotnet run

A *.csexe and a *.csmes file will be generated. Open these in the CoverageBrowser:

$ coveragebrowser -m ./obj/Debug/netcoreapp3.0/myApp.dll.csmes -e myApp.dll.csexe

The reported code coverage is 100%, as shown in the image above.

Method 2: Access to the Project Files

In this method, we will edit the *.csproj file delivered with the tutorial to include a statement to activate the code coverage.

Open myApp.csproj, and add the following line within the PropertyGroup bracket:

<DefineConstants>COVERAGESCANNER_COVERAGE_ON</DefineConstants>

The complete myApp.csproj file should look like the following:

<Project Sdk="Microsoft.NET.Sdk">

   <PropertyGroup>
       <OutputType>Exe</OutputType>
       <TargetFramework>netcoreapp3.0</TargetFramework>
       <DefineConstants>COVERAGESCANNER_COVERAGE_ON</DefineConstants>
   </PropertyGroup>
</Project>

Save the file. Next, as in Method 1, clean, build, and run the project:

$ dotnet clean
$ dotnet build
$ dotnet run

Similar to Method 1, we can open the coverage results in the CoverageBrowser:

$ coveragebrowser -m ./obj/Debug/netcoreapp3.0/myApp.dll.csmes -e myApp.dll.csexe

Again, we see 100% statement coverage.

Summary

The above methods demonstrate instrumenting a simple .NET Core project on Linux, for obtaining code coverage data for the program using Squish Coco. Note that no changes to the actual source code were required, only minor modifications to the .csproj file, and in the case of Method 1, no changes to the project were required at all.

The post Obtaining Code Coverage Data for a .NET Core Application on Linux appeared first on froglogic.

Running Automated GUI Tests with Azure DevOps


We understand that more and more applications require a complex, powerful infrastructure that cannot always be provided in-house. In such cases, cloud-based solutions for building and running applications are often selected. Azure Services is one such solution.

Automated GUI tests developed with the Squish GUI Tester can be easily executed with Azure DevOps on Azure Virtual Machines.

For Squish tests to execute successfully, the Virtual Machine must provide some kind of display (e.g., via an active RDP connection).

Once the test environment is ready, Azure DevOps can be used to configure, schedule and execute Squish tests. Just add a command line task to the Pipeline and fill it with the script you would use for command line execution.
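A much-reduced sketch of such a script, assuming a squishserver is already running on the machine (the installation path, suite location and report paths are placeholders):

"C:\Squish\bin\squishrunner" --testsuite C:\tests\suite_myapp --reportgen junit,C:\results\junit.xml --reportgen html,C:\results\html

The JUnit file can then be published for review in DevOps, and the HTML report can be archived, as described below.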

For the execution, a suitable Squish package must be installed on the target environment. This can be done in advance by launching the Squish installer on the Virtual Machine, or by performing an unattended installation as part of the defined DevOps Pipeline.

The execution can be started manually or automatically by the trigger defined in the Pipeline.

After the execution, generated JUnit reports can be published and reviewed directly from the DevOps pages. Squish HTML reports can be archived, e.g., on the DevOps Shared Storage.

To get more information on that and to get familiar with the example configuration that can be used to integrate Squish and Azure, we encourage you to check our documentation.

The post Running Automated GUI Tests with Azure DevOps appeared first on froglogic.

Screenshot Verification of a Button in a Pressed State


Standard buttons do not need to have their visual appearance tested as this is typically done by the vendor of the button control. A custom button control, however, will need to have its visual appearance tested in the various possible states of the button (enabled, disabled, mouse cursor not hovering over it, mouse cursor hovering over it). With Squish, to verify the visual appearance, Screenshot Verification Points (VPs) can be used.

The Problem

Creating Screenshot VPs for controls in a pressed state is not easily possible, because once the mouse button pressed on the control is released again, the control changes back to the unpressed state and its original visual appearance. If you check the box in the ‘Application Objects’ view to create a Screenshot VP for the button, there is not enough time to go back and press the button before the screenshot for the Verification Point is taken, so the screenshot captures the unpressed state.

A Script-based Solution

Here is how you can work around this issue of creating screenshot VPs for controls in their pressed state:

First, record your test script and create a Screenshot Verification Point as you normally would for the button to be tested.

Your recorded test script will look something like this:

import * as names from 'names.js';
function main() {
     startApplication("addressbook");
     activateItem(waitForObjectItem(names.addressBookQMenuBar, "File"));
     activateItem(waitForObjectItem(names.addressBookFileQMenu, "New"));
     //screenshot verification point for the button    
     test.vp("VP1");
}

Next, modify your test script to put the Verification Point statement test.vp() between mousePress() and mouseRelease() statements. Because these statements are not recorded, you simulate the button-pressed state by manually editing the recorded test script to contain them.

Your modified test script should look something like this:

import * as names from 'names.js';
function main() {
     startApplication("addressbook");
     activateItem(waitForObjectItem(names.addressBookQMenuBar, "File"));
     activateItem(waitForObjectItem(names.addressBookFileQMenu, "New"));

     // Get the object reference of the button
     var add_button = waitForObject(names.addressBookUnnamedAddQToolButton);

     // Insert test.vp() between mousePress() and mouseRelease() statements
     mousePress(add_button);
     test.vp("VP1");
     mouseRelease(add_button);
}

Now, execute your test script. The verification will fail, and an error will be thrown. (In the test script, the screenshot is taken after the mouse is pressed, and therefore the image taken is that of the pressed button.)

This image is compared against the original image in the Verification Point, which was created for the unpressed state. Right-click on the thrown error and, from the context menu, select the option “Use As Expected Result”. This replaces the original unpressed-button image in the VP with the pressed-button image, which will then be used for comparison in all future test runs.

The post Screenshot Verification of a Button in a Pressed State appeared first on froglogic.

Customizing Which Tests are Executed with the Squish Jenkins Plugin


Automated GUI tests created with Squish are organized within test suites. The criteria for grouping test cases might differ depending on the company organization, the complexity of the Application Under Test (AUT), or for any other reason.

Squish offers various execution modes from single test execution to tagged executions. Thanks to that, the user is not limited to executing whole test suites but can customize the set of tests executed.

The Squish Jenkins plugin supports these execution modes and provides a few additional features. Below you will find some useful examples that can be applied not only to your Jenkins Freestyle projects but to Jenkins Pipeline jobs as well.
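For Pipeline jobs, the same selections can be expressed as arguments to the squish step, reusing the parameter names from the pipeline pseudo-code shown earlier (in the webinar Q&A post). This is a sketch only; check the plugin documentation for the exact syntax:

node {
    // Run only two test cases of one suite (names are placeholders)
    squish([testsuite: '/path/to/suite_addressbook',
            testcase: 'tst_add tst_remove'])
}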

Running Multiple Test Suites

For those cases where multiple test suites should be executed, list them in separate lines in the “Test suites” configuration field:

multiple test suites

Temporarily Excluding a Test Suite

If some of the listed test suites should not be executed, they can be commented out with the # character:

exclude test suite

Running All Test Suites from a Given Location

Instead of providing a path to a single test suite directory, use a path to the parent directory and add “/*” or “\*” at the end. The test suite directory names should start with the suite_ prefix:

Running Only Selected Test Cases

To execute only certain test cases from a test suite, list the test case names in the Test cases configuration field. Test cases should be separated with a whitespace:

Skipping Selected Test Cases

To execute all but certain test cases from a test suite, list the test cases to skip in the Test cases configuration field and check the Skip test cases option. Test cases should be separated with a whitespace:

Tagged Execution

We often recommend tagging test cases. This not only adds useful meta information to your tests, but allows selecting and executing single test cases from multiple test suites and locations. To learn how to tag your test cases, we recommend visiting this article. The tagged execution can be accomplished with the Extra options field.

The --tags option can be used multiple times in different configurations. For more details, we recommend getting familiar with this table.

Summary

Continuous automation using the Jenkins CI server helps streamline your automated GUI testing workflow. The Squish Jenkins plugin offers custom execution modes so that you can run only the tests required at a given time. Want to get started using Jenkins to launch your Squish tests? Visit our step-by-step guide.

The post Customizing Which Tests are Executed with the Squish Jenkins Plugin appeared first on froglogic.

Integrate Squish Test Center with Test & Requirements Management Systems


Integrations for Squish Test Center enable you to achieve traceability between test results stored in Squish Test Center and tests or requirements stored in 3rd-party management systems. These integrations also let you transfer and synchronize your test results between Squish Test Center and the supported 3rd-party management system, so that you can view your test results within that system.

Squish Test Center currently offers support for integrations with TestRail, JIRA, Zephyr, Xray and QAComplete, with more integrations to be added in the future. If your test or requirements management tool is missing, let us know in the comments section or via support@froglogic.com.

Core Features

Traceability

Traceability essentially means being able to jump easily from one system to another by providing links between the two systems. To achieve that, Squish Test Center offers the Traceability View. This view enables you to see a list of the requirements or tests managed by the 3rd-party management system and the associated tests managed by Squish Test Center. You can use the Traceability View to get an overview of the tests and requirements coverage. Furthermore, the view provides links to the integrated system and provides a Requirements Traceability Matrix Export (RTM-Export).

Squish Test Center’s Traceability View: Requirements coverage for requirements pulled from JIRA.

Additionally, Squish Test Center can also add backlinks to the associated items within the integrated test or requirements management tool. This allows you to jump easily to the latest test results from your 3rd-party management tool. The backlinks are added as custom fields. For integrations where that is not possible, the backlink is appended to the item description.

Synchronizing Results

Another of Squish Test Center’s core features is result synchronization. Squish Test Center can push the most recent test results for all associated tests and requirements to the 3rd-party management system. Since the feature set of the supported systems isn’t identical, pushing results differs slightly between implementations. For all management systems that support setting a test result (TestRail, Zephyr, Xray, QAComplete), we simply set the result. For Zephyr and Xray, we can also set a specific Release and Test Cycle, while for QAComplete we can set a Release and Configuration. For JIRA, we add the test status to the description and have the ability to open or close issues depending on whether the linked tests have passed or failed.

Issue or Test Creation

For JIRA, Zephyr and Xray, Issues or Test Items can be created directly from the Squish Test Center result view. If a test starts failing, this can be used to create an Issue ticket in the external system, so that the issue with the test can be tracked.

This feature can also be used to synchronize the test items managed by Squish Test Center with the test items managed by the 3rd-party test management system.

Zephyr Test Creation from Squish Test Center.

How the Integrations Work

Setup the Connection

To establish the connection between the test or requirements management tool, the integration needs to be activated in the Global Settings menu. Also, the server address and the authentication details need to be specified:

Squish Test Center Integration Configuration.

Establish Mapping

To be able to transfer your test results to an external system and to achieve traceability, the tests uploaded to Squish Test Center need to be linked to the requirements, issues or tests of the 3rd-party management system.

This is handled by the Traceability Mapping which is visualized at the center of the conceptual graphic below:

Conceptual Overview of Squish Test Center Integrations.

For the traceability mapping, the requirements, issues or test items are pulled from the 3rd-party system and stored in the Squish Test Center database. Once these are known to Squish Test Center, they can be mapped to the tests managed by Squish Test Center.

Squish Test Center offers some limited automatic mapping functionality that either matches the name of the 3rd-party item to the test item managed by Squish Test Center, or reads the name of the associated test item from a custom field in the 3rd-party management tool. In most cases, however, we expect that the names do not match and that no such custom fields exist. For those cases, the mapping can be customized manually. Establishing the initial mapping will require some work, but we expect it to need only moderate maintenance afterwards.

The fully custom manual mapping can be easily changed from the Traceability View:

Traceability Mapping Dialog.

Automate Result Synchronization

After the mapping has been established, you can start to push test results to your test or requirements management system. You can do this manually from Squish Test Center’s Traceability View, where you can push results or pull in tests or requirements at any time. But you can also automate pushing and pulling with the testcentercmd utility that comes with Squish Test Center, making synchronization part of your test automation so that results are pushed whenever new test results have been uploaded to Squish Test Center.

These features enable a seamless integration with your existing test automation and reporting infrastructure.

The post Integrate Squish Test Center with Test & Requirements Management Systems appeared first on froglogic.


How to Locate an Object on a Map

Motivation

Consider an application that features a search resulting in markers being shown on a map. We want to verify that an expected number of occurrences of a marker, a known image, is shown. Broadly speaking, we want to count how often a particular image is part of a bigger image.

test.imagePresent()

One way to tackle this is to use one of the Squish API functions that let us search for images in our AUT, the Google Maps website in this case. The verification function test.imagePresent() is one of them: it simply logs a pass or fail depending on whether a given image is found. But this might not be flexible enough, and in our case it is not.
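
For illustration, here is a minimal sketch of that check; it assumes the marker image has been extracted (as described in the next section) and stored in the test suite as "map_marker.png":

// Logs a PASS if the marker image is found on the screen at least once, a FAIL otherwise.
test.imagePresent("map_marker.png");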

waitForImage() and findImage()

For more flexibility, there are waitForImage() and findImage(). If in doubt, stick with the former: it gives the image in question some time to become visible.

The image we want to search through, the map, is referenced via an object name. The marker we want to search for, on the other hand, needs to be extracted and stored in our test suite. To extract it, or, in other words, to take the marker from the rendered map, we use our favorite image editing tool to remove everything around the marker. The area around the marker should become transparent instead. This way, Squish ignores these extraneous parts when searching for the marker on the map.

Still, there are two things to watch out for. First, we should ensure that the alpha-transparent edge of the marker is removed as well if we aim for pixel-perfect matches. Second, we should be mindful that the markers contain the single characters A, B, etc. on top of them.

For the first consideration, we could accept the existence of small differences between the images by enabling tolerant search mode, a feature all image lookup APIs support. In case the default setting is not tolerant enough, we could override the default threshold for how much difference is accepted:

waitForImage("map_marker.png", {tolerant: 1, threshold: 0.95});

For the second consideration, where we want to exclude the letters, we again use our favorite image editing tool to remove the area that holds the character from our image and set it to transparent, so that it is not used when detecting the marker on the map.

Left: Marker to be found. Right: Masked marker to improve search.

Solution

With these preparations, we can develop a Squish test that searches for the marker on the given image. By default, the search returns the first occurrence only, but it can be customized using the ‘occurrence’ parameter.

In a loop we ask findImage() for all occurrences of the marker; the property object passed to findImage() has its occurrence value increased by 1 on each iteration. Once no additional marker can be found, findImage() throws an exception. We use this as a signal to stop looking and have our findMarkersOnMap() function return the list of found occurrences.

import { Wildcard } from 'objectmaphelper.js';

var googleBrowserTab = {"title": "Google", "type": "BrowserTab"};
var googleQText = {"container": googleBrowserTab, "form": "tsf", "name": "q", "tagName": "INPUT", "type": "text", "visible": true};
var karteVonPackstationIMG = {"img_alt": new Wildcard("Karte von packstation*"), "tagName": "IMG", "visible": true};

function main() {
    var expectedNoOfMarkers = 3;
    doGMapsSearch();
    checkMarkers(expectedNoOfMarkers);
}

function doGMapsSearch() {
    startBrowser("https://www.google.com");
    typeText(waitForObject(googleQText), "packstation hamburg");
    typeText(waitForObject(googleQText), "<Return>");
    waitForObject(karteVonPackstationIMG);
}

function checkMarkers(expectedNoOfMarkers) {
    var threshold = 95;
    var m = findMarkersOnMap(threshold);
    test.compare(expectedNoOfMarkers, m.length,
            "Expected exactly " + expectedNoOfMarkers + " markers");
}

function findMarkersOnMap(threshold)
{
    var foundMarkers = [];
    var occ = 0;
    try {
        while (true) {
            var marker = findImage("gmap_marker_masked.png",
                    {'tolerant': 1, 'threshold': threshold, 'occurrence': ++occ},
                    karteVonPackstationIMG);
            foundMarkers.push(marker);
        }
    } catch (e) {
        if (!(e instanceof TypeError)) {
            throw e;
        }
        // We expect this exception after all markers were found
    }
    return foundMarkers;
}

You might have noticed the usage of findImage() (in contrast to waitForImage()), even though earlier we suggested sticking with the latter. We waited for the map image to be loaded before entering the loop, so we can expect that it is not changing anymore. waitForImage(), however, would make multiple attempts to locate a (non-existing) fourth marker, which would slow down test execution unnecessarily.

Outlook

In this example, we cared about the number of markers only and masked the text that is visible on each marker. Verifying the text on each marker would be possible as well. To achieve that, Squish offers a built-in Optical Character Recognition (OCR) approach: the function getOcrText() could be used to extract the text displayed on each marker.
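
A minimal sketch of that idea is shown below; it assumes the OCR engine is configured and that getOcrText() accepts an object to restrict recognition to the map image (per-marker cropping is left out):

// Read all text rendered on the map image and verify that a marker label shows up.
var mapText = getOcrText({}, waitForObject(karteVonPackstationIMG));
test.verify(mapText.indexOf("A") != -1, "Marker label 'A' recognized on the map");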

The post How to Locate an Object on a Map appeared first on froglogic.

Test Impact Analysis

What is Test Impact Analysis?

Test Impact Analysis (TIA) is an optimization method used to determine which tests exercise a specific code change. The goal of TIA is to improve the efficiency of the software testing process, a goal critically important in development projects where the number of test cases grows rapidly in tandem with changes to production code.

A Test Impact Analysis is conducted with a code coverage tool. With such a tool, engineers can determine which changes are hit by a given test and thus prioritize which subset of their test suite(s) to run. TIA with a code coverage tool can also uncover duplicate tests, that is, multiple tests which hit the same sources. Employing this analysis will enable your development team to streamline test automation, no matter who wrote the source code or how long ago it was written.

This article will walk you through how to use Squish Coco for Test Impact Analysis. 

Coco’s Integration of TIA

Consider the following development scenario where you’d want to conduct a TIA. In a large project, a last-minute patch has to be evaluated. There is not enough time to run the entire test suite, but some risk assessment needs to be performed. With Coco’s Patch Analysis feature, you can display the code coverage specifically for the changed lines of code and find the tests in a large suite that cover them. You can then see how risky the changes are.

You can try this feature yourself by following these instructions:

Prepare

If you have not already done so, adjust Coco to map your test cases to executions by name. Coco’s documentation lists the steps for some specific frameworks, including CppUnit and QTestLib. 
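
As an illustration, here is a minimal C++ sketch of such a mapping using Coco’s CoverageScanner library functions; the begin_test()/end_test() hooks are hypothetical and would be called from your framework’s setUp/tearDown equivalents:

#ifdef __COVERAGESCANNER__            // defined only when building with Coco instrumentation
void begin_test(const char *name)
{
    __coveragescanner_clear();        // discard coverage data left over from the previous test
    __coveragescanner_testname(name); // record the following execution under this test name
}

void end_test()
{
    __coveragescanner_save();         // write the coverage data gathered for this test
}
#endif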

Instrument and run the complete test suite for your application. This will result in a csmes file and a csexe file. Import the data from the csexe file into the csmes file via the CoverageBrowser or the cmcsexeimport tool. 

Develop

Develop your project: write new code and add new features. 

Run the git command diff with the current changes:

$ git diff --staged > diff.txt

Analyze

Now analyze the difference with the cmreport tool:

$ cmreport -m report.csmes -p diff.txt \
--csv-excel=patchanalysis.csv \
--section=patch-execution 

This will result in a .csv file that holds all of the executions that cover the changed parts of your code.

We can now print out the test case names:

$ sed '1,/^"Execution Name.*$/d' patchanalysis.csv \
| grep '^"\K([^,"]+)' -oP

You can now feed your test framework with this information in order to execute only the listed test cases. This will greatly reduce the time spent testing future incremental changes to production code.
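
For example, a minimal shell sketch could pass the extracted names straight to the test runner (the runner name and its command line are hypothetical; adapt them to your framework):

$ TESTS=$(sed '1,/^"Execution Name.*$/d' patchanalysis.csv | grep '^"\K([^,"]+)' -oP)
$ ./unittests $TESTS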

For more info on Coco’s Patch Analysis feature, visit our documentation.

The post Test Impact Analysis appeared first on froglogic.

Testing WPF Popups and Tooltips

Introduction

By their nature, popups and tooltips disappear as soon as they lose focus, making it a bit hard to inspect their structure during test development. In Squish’s next major release, WPF Popups and Tooltips will get their own Squish types, Popup and ToolTip respectively, making working with them a bit easier. This blog will show you how to test these controls in the newest Squish.

Triggering Display

In order to test popups and tooltips, you first have to display them. For tooltips, that can usually be done by moving the mouse cursor over a control. The script function for that could be:

def trigger_tooltip(object_name):
    obj = waitForObject(object_name)          
    mouseMove(obj, obj.width / 2, obj.height / 2)

WPF Tooltip

For popups, you can also try moving the mouse cursor, but often popups are triggered by a mouse click. So, the standard Squish function clickButton() can be used for that:

clickButton(waitForObject(names.Show_Popup_Button))

WPF Popup

Finding Labels

Once the tooltip or popup appears on the screen, you can start searching for the labels inside it. The following script code could be used for that purpose:

labels = findAllObjects({"container": {"type": "ToolTip"}, "type": "Label"})
...
labels = findAllObjects({"container": {"type": "Popup"}, "type": "Label"})

If you would like to see the structure of these controls when displayed, you can use the Squish function saveObjectSnapshot():

tooltip = waitForObjectExists({"type": "ToolTip"})
saveObjectSnapshot(tooltip, "tooltip_snapshot.xml")
...
popup = waitForObjectExists({"type": "Popup"})
saveObjectSnapshot(popup, "popup_snapshot.xml")

Later on, you can inspect these snapshots and figure out the structure of the controls, which is often useful if the structure is unknown or more complex.

For searching the labels you can also use the function find_labels from our Knowledge Base article, Testing WPF Tooltips. This function can give you a bit more flexibility and control if you need it.

And, of course, you can also put {"type": "ToolTip"} and {"type": "Popup"} into the scripted object map so that you have something like names.tooltip and names.popup, which is a bit nicer and easier to maintain.
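
For example, the scripted object map file could contain two entries like the following (the entry names are hypothetical):

# names.py -- object map entries for the tooltip and popup containers
tooltip = {"type": "ToolTip"}
popup = {"type": "Popup"}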

Verifying Text

Once you get the labels, you can obtain their text by using the following function:

def get_text(labels, separator=" "):
    text = [ label.text for label in labels ]
    return separator.join(text)

The function iterates over the labels list, collects the text property of each label and joins the texts with the given separator.

Finally, once you have the text, you can use the standard Squish functions for verifications, as in the example below:

test.compare(get_text(labels), "Example Popup Popup Content")

Or, if you would like to have a different separator between the labels, you can use:

test.compare(get_text(labels, "/"), "Example Popup/Popup Content")

Example

The example application and Squish test suite used in this blog post can be downloaded here: WpfPopupAndTooltip.zip

The post Testing WPF Popups and Tooltips appeared first on froglogic.

How to Choose Which GUI Tests to Automate

Automating Graphical User Interface (GUI) tests is a challenging task. In theory, any test can be automated, but it is not worth automating every test, often because of limited resources (i.e., the time spent writing the automated test). So how do you decide which test cases are worthy of automation? Specific requirements will depend on the product, team skillset, time constraints and tool limitations. In this article, we’ll list the factors by which you can evaluate a test as a potential automation candidate.

  1. Repetitive Test Runs (R)
    Evaluates how often a given manual test is executed. If the test should be run as part of regression testing every time you release your product, then it should be considered for automation. Once automated, you can run this test not just at every release, but at every build of the application under test (AUT).

  2. Test Severity (S)
    This factor represents how critical the feature is to the AUT. This should be evaluated by all stakeholders, giving tests for the critical parts of your application a priority over tests for lower-severity areas.

  3. Configurations (C)
    The number of configurations a given test needs to be run on. If a certain test needs to be run on varying software configurations, across multiple platforms or using different test datasets, this indicates the test is a good candidate for automation.

  4. Time to Execute the Test Manually (T)
    This factor represents the time needed to execute the test manually on a given configuration.

  5. Test Performance (TP)
    This factor represents the historical failure detection rate of the given test, that is, how often the test’s execution leads to finding a defect. Modern test management systems, like Squish Test Center, can provide such information. On the other hand, some tests are executed for many years without ever finding a new defect.

  6. Ease of Automation (A)
    Describes how easy it is to automate the test. Here you need to consider test complexity, test automation tool limitations, the programming skills of test developers and, finally, the nature of the given test. A good automation candidate will have results that are precise and deterministic, and which can be evaluated by a computer program.

Automation Candidate Factor

For each manual test, an Automation Candidate Factor (ACF) can be calculated. Evaluate each factor, giving it a weighting from 1 to 3. An example formula is: ACF = (R + S + C + T + TP)/5 + A. With this factor, you can allocate your limited resources to automating the tests that are truly worth automating.
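
For example, a regression test rated R=3, S=3, C=2, T=2 and TP=1 that is easy to automate (A=3) would score ACF = (3 + 3 + 2 + 2 + 1)/5 + 3 = 5.2, placing it near the top of the list of automation candidates.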

Regardless of the priority you assign to a given manual test, the quality of the manual test preceding automation is also an important factor. Poor quality manual tests lead to poor quality automated tests. Are all the steps in the test case well described? Are expected results precise? Investing time in improving the quality of manual tests before automating them is, in general, a good idea.

The Ease of Automation (A) factor depends on the tool that you are using. The Squish GUI Tester is the preferred tool for creating automated GUI functional regression and system tests for all desktop, mobile, web and embedded platforms. It allows developing tests in Python, JavaScript, Perl, Ruby and Tcl. It also supports checking that your application’s GUI works as expected through object property verifications, screenshot comparison and visual verification (content, geometry, topology, appearance). The newest additions to its verification capabilities include on-screen image and OCR-based text recognition.

The post How to Choose Which GUI Tests to Automate appeared first on froglogic.

Test Case Prioritization Using Code Coverage Analysis

What is Test Case Prioritization?

Test Case Prioritization is a method in which the execution order of test cases is scheduled to maximize software testing efficiency.

Consider a common dilemma in the software development lifecycle: testing must be conducted, but there is not enough time to run the full regression suite. Through a prioritization approach, one can schedule test executions such that only a subset of tests covering the most source code is run. This method reduces the time required to test the software, while maximizing testing efficiency.

But how can we determine the correct subset of tests? And, by extension, how can we determine the source coverage provided by our test suite? The answer: using a code coverage tool.

Code Coverage Analysis

froglogic’s Squish Coco is a multi-language code coverage analysis toolchain. Using automatic source instrumentation, Coco can determine the source coverage achieved by manual, unit, GUI or integration test executions. It also offers a feature specifically for Test Case Prioritization: Coco’s CoverageBrowser calculates an execution order in which high code coverage is reached quickly with a small number of tests. The test with the highest coverage is listed first, the second test is the one that adds the most additional coverage, and so on. All of this is calculated automatically, with the push of a button.

We’ll look into Coco’s method of Test Case Prioritization using a simple C++ program which acts as a parser (or calculator) for basic expressions. This example is included in all Coco packages.

A Step-by-Step Example

The parser example includes a set of unit tests. In order to make use of Coco’s Test Case Prioritization, you must first instrument the application and execute all unit tests once. The documentation includes a step-by-step guide on doing just that.

Once the parser program is compiled with instrumentation and the tests are run, we’ll view the coverage results in the CoverageBrowser. Two files are involved: unittests.csmes, which is generated when building with instrumentation and contains the information needed for the coverage measurement, and unittests.csexe, which is generated by executing the tests and contains the results of the code execution.

Load these into Coco’s CoverageBrowser:

$ coveragebrowser -m unittests.csmes -e unittests.csexe

To calculate an optimized test execution order, follow these steps:

  1. Select the executions in the Execution window. (In this example, one test failed — you can deselect this unit test.)
  2. From the top menu bar, choose View > Optimized Execution Order…
  3. In the window that appears, click Compute.

The window displays the prioritization. It shows the aggregated coverage level and the aggregated time spent executing these tests. Note that each new line in the results displays the cumulative coverage. For example, the unit test testVar alone provides 29.208% code coverage with an execution time of 0.000681 seconds. Line 2, testSyntax, lists the combined coverage of both testSyntax and the preceding test, testVar, at 32.921%.

We selected 13 of the 14 total unit tests to prioritize. However, you’ll notice only 11 unit tests are displayed in the Optimized Execution Order window. The optimization focuses only on coverage: if a test does not add any coverage beyond the tests already listed, Coco recognizes it as redundant and omits it. This makes sense for, e.g., a smoke test, but is not recommended for a full functional test run.

Optimized Execution Order

Now, in future rounds of testing where time is a limiting constraint, you can select only those tests which provide the highest coverage in the shortest amount of time. This proves useful also if, for example, your testing method includes many manual tests.

Recommendations for Future Reading

Test Impact Analysis, another optimization method to improve testing efficiency, is used to determine which tests exercise a specific code change. This analysis is useful when, for example, you’ve committed a last-minute patch to the source code, but there is not enough time to run the full test suite and you still want to conduct some risk assessment. With Test Impact Analysis, you can display the code coverage specifically for the changed lines of code and find the tests in a large suite that cover them. We’ve written an article on this topic, which you can find here.

Summary

Test Case Prioritization can save a lot of time and effort, especially in development scenarios where time is a limiting constraint. With a code coverage tool like Squish Coco, you can determine the optimized execution order with a click of a button.

The post Test Case Prioritization Using Code Coverage Analysis appeared first on froglogic.
