
Bulk Verifications of Similar Objects


Squish offers a number of verification points, or VPs: object property verification, screenshot verification, table verification and visual verification. We can create verification points during the initial recording of a test case or while recording a snippet. When doing so, we need to select the object we want to verify, which is easily done using the Pick Tool, a tool that allows us to point at the object. But what if we want to verify multiple objects in the GUI and picking all of them is not convenient? For example, we might want to verify the size of all labels on the screen or validate all cells in a given table column. This article describes how such bulk verifications can be done easily.

The presented solution takes advantage of the findAllObjects function introduced in Squish 6.4. All code snippets are written in Python for an example Qt Widgets application.

Verify All Labels

To verify all labels in a given window, let’s first define a helper function to find all objects by type:

def findObjectsByTypeOnWindow(objType, window):
    return findAllObjects({"type": objType, "visible": 1, "window": window})

Here is an example usage of the above function to verify the size and style of the labels:

labels = findObjectsByTypeOnWindow('QLabel', names.address_Book_Add_Dialog)
for obj in labels:
    test.startSection("Verify label: " + str(obj.text))
    test.compare(obj.font.pointSize, 8, "pointSize is 8")
    test.compare(obj.font.bold, False, "is not bold")
    test.compare(obj.font.italic, False, "is not italic")
    test.endSection()

Verify All Cells in a Column

Static tables can be verified using table verification points. This approach works well when we want to verify an entire table whose size never changes. But what if the table size varies and we would like to verify just one column? Imagine, also, that we don’t want to check exact values, but would like to define a condition the values must satisfy.

Let’s start by defining a validator function with the condition that a cell must meet. In the example below, we implement a function which verifies that a cell holds a positive integer value. You can easily create your own validators that check other conditions, for example that a value is in a given range or that it matches a string pattern.

def positiveValidator(n):
    # isdigit() alone would also accept "0"; require a value greater than zero
    return n.isdigit() and int(n) > 0
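Other validators follow the same pattern. Here is a small sketch of two more (the bounds and the pattern are arbitrary examples, not from the original article):

import re

def rangeValidator(n, low=0, high=100):
    # Passes if the cell holds an integer within [low, high]
    return n.isdigit() and low <= int(n) <= high

def phoneNumberValidator(n):
    # Passes if the cell looks like a simple phone number
    return re.match(r"^\+?[0-9 -]+$", str(n)) is not None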

Now, let’s define a function which validates all cells in a column.

def validateCellsInColumn(column, table, validator):
    cells = findAllObjects({"column": column, "container": table, "type": "QModelIndex"})
    for cell in cells:
        test.verify(validator(cell.text), "Validate cell with text: " + cell.text)

Finally, a code snippet from a test case where this function is called:

validateCellsInColumn(3, names.address_Book_Unnamed_File_QTableWidget, positiveValidator)



Video: Analyzing Dependencies with Dependency Walker


Learn how to analyze the dependencies of your application to find a matching Squish for Qt package that is binary-compatible with your application under test. Dependency Walker is a free tool that can help find out which libraries and library versions your application depends on, such as the Qt libraries. In addition, it can be used to reveal the compiler and compiler version used to build the application. Both help to identify the Squish for Qt package that is compatible with your application.


Some helpful links:

Dependency Walker Project Page
Knowledge Base Instructions for the Dependency Walker
Requirements List of Squish for Qt


Explore Squish’s Object- and Image-based Recognition Capabilities in This Month’s Article from Test Magazine


The article linked below explores Squish’s unique capabilities for object-based and image-based recognition, showing you when one is the preferred method and when combining both in a single test case is most suitable for creating robust tests with long-term stability.


Demo: API Testing Using Squish


Testing is an important part of software development, the idea being that the more thorough the tests, the higher the chances of discovering code defects or bugs. While Squish focuses on GUI testing, it can also be used for other kinds of testing, including what we would like to discuss in this article: API testing.

An API, or Application Programming Interface, is a GUI-less element of an application whose purpose is usually to provide data to other parts of the software (commonly the UI), to update the data storage/state of the software, and more. Following this description, we can see that API testing regroups several kinds of testing:

  • Functional testing, validating the behavior against a set of expectations
  • Security testing, validating authentication, access control and/or encryption
  • Load testing, validating functionality and performance under load
  • And more

Given that most of these examples are good automation candidates, we want to present an example of how this can be done with Squish. We will limit ourselves to a simple functional test of a small REST API, but the process can easily be extended to other kinds of testing or other types of APIs (SOAP, to name another). More about REST APIs can be found here.

The API

Since the focus is not on the API implementation but rather on how to test it with Squish, we will choose simplicity and use an already available simple REST API that can be accessed here: https://reqres.in/. Changes made via REST calls on the different endpoints are not persistent; the interesting part for us is sending and receiving realistic data.

The Tests

Squish supports writing test cases in two forms: through pure scripting or through what is known as Behavior-Driven Development (BDD) tests. BDD tests are written using the Gherkin syntax, which allows developing a complex test case while maintaining its comprehensibility to non-technical users. More info on BDD and Gherkin can be found here and here, respectively.

As pointed out above, we want to showcase how our simple example of functional API testing would look under each of two approaches:

  • Business, or non-technical, users authoring a test case
  • Technical users authoring a test case

We’ll leave it to the reader to decide which approach he or she is more comfortable with. The list of BDD scenarios we propose is not exhaustive, and a number of corner cases are not covered for the sake of simplicity.

Business Approach

The business, or non-technical, approach typically bases the tests on use cases, leading to more verbose and understandable scenarios.

Tests written or designed by non-technical users are usually derived from the product requirements or a common use case. This leads to expressive scenarios and steps that sometimes contain almost no technical information. The downside to this approach is the need for clear communication between the non-technical users, who define the intent of each scenario, and the technical users who implement the steps based on those expectations.

Here is an example feature file:

Feature: Testing a REST API

  Scenario: User registration is unsuccessful without password
	When user sends a registration request without password
	Then the server returns an error status code
	And a payload containing an error message

  Scenario: User can create another user
	When user sends a user creation request
	Then the server returns a success status code
	And a payload containing user data

  Scenario: User can browse the list of all available colors and delete one of them
	Given the user has fetched the list of all colors
	When user sends a delete request for one of them
	Then the server returns a success status code

And the corresponding steps file:

import * as names from 'names.js';

When("user sends a registration request without password", function(context) {
    var client = new XMLHttpRequest();
    client.open('POST', 'https://reqres.in/api/register', false);
    client.setRequestHeader("Content-Type", "application/json");
    var data = '{"email": "tester@froglogic.com"}';
    client.send(data);
    context.userData["sentData"] = data;
    context.userData["status"] = client.status;
    context.userData["response"] = JSON.parse(client.response);
});

Then("the server returns an error status code", function(context) {
    test.verify( context.userData["status"] == 400 );
});

Then("a payload containing an error message", function(context) {
    var response = context.userData["response"];
    test.verify( response["error"] !== undefined );
});

When("user sends a user creation request", function(context) {
    var client = new XMLHttpRequest();
    client.open('POST', 'https://reqres.in/api/users', false);
    var data = {"email": "tester@froglogic.com"};
    client.send(data);
    context.userData["sentData"] = data;
    context.userData["status"] = client.status;
    context.userData["response"] = JSON.parse(client.response);
});

Then("the server returns a success status code", function(context) {
    var status = context.userData["status"];
    test.verify( status == 200 || status == 201 || status == 204 );
});

Then("a payload containing user data", function(context) {
    var response = context.userData["response"];
    test.verify( response["id"] !== undefined );
    test.verify( response["createdAt"] !== undefined );
});

Given("the user has fetched the list of all colors", function(context) {
    var client = new XMLHttpRequest();
    var current_page = 1;
    var last_page;
    var colors = [];

    do {
    client.open('GET', 'https://reqres.in/api/colors?page='+current_page.toString(), false);
    client.send();

    var response = JSON.parse(client.response);
    last_page = response["total_pages"];
    
    for (var i =0; i < response["data"].length; ++i) {
        colors.push(response["data"][i]);
    }
    current_page++;
    } while (current_page != last_page) ;

    context.userData["status"] = client.status;
    context.userData["colors"] = colors;
    context.userData["response"] = JSON.parse(client.response);
});

function getRandomInt(max) {
    return Math.floor(Math.random() * Math.floor(max));
}

When("user sends a delete request for one of them", function(context) {
    var random = getRandomInt( context.userData["colors"].length - 1 );
    
    var client = new XMLHttpRequest();
    client.open('DELETE', 'https://reqres.in/api/colors/'+random.toString(), false);
    client.send();
    context.userData["status"] = client.status;
});

Technical Approach

In this approach it is common to expose technical information directly in the steps, clearly stating the input and output expected from the tested product.

Feature: Testing a REST API

  Scenario: User registration is unsuccessful without password
	When user sends 'POST' request to '/api/register' with the following data:
		| email |
		| tester@froglogic.com |
	Then the server returns '400' as status code
	And the following payload:
		| error |
		| Missing password |

  Scenario Outline: User can create multiple users
	When user sends 'POST' request to '/api/users' with the following data:
		| email | 
		| <mail_address> |
	Then the server returns '201' as status code
	And the following payload:
		| id | createdAt |
  Examples:
	| mail_address |
	| xx@xx.x |

  Scenario: User can browse the list of all available colors and delete one of them
	Given the user has fetched all pages from '/api/colors'
	And assign one of the ids to 'random'
	When user sends 'DELETE' request to '/api/colors/<random>'
	Then the server returns '204' as status code

And the corresponding steps file:

import * as names from 'names.js';

When("user send '|word|' request to '|any|' with the following data:", function(context, method, url) {
    var data = {};
    var table = context.table;
    var headers = table.shift();

    for(var j=0; j < headers.length; ++j){
        data[headers[j]] = table[0][j];
    }
    
    var client = new XMLHttpRequest();
    client.open(method, 'https://reqres.in'+url, false);
    client.setRequestHeader("Content-Type", "application/json");
    client.send(JSON.stringify(data));  // send the data built from the table above
    context.userData["sentData"] = data;
    context.userData["status"] = client.status;
    context.userData["response"] = JSON.parse(client.response);
});

Then("the server return '|integer|' as status code", function(context, code) {
    test.verify( context.userData["status"] == code);
});

Then("the following payload:", function(context) {
    var table = context.table;
    var properties = table.shift();
    var payload = context.userData["response"];
    
    for(var j=0; j < properties.length; ++j){
        test.verify( payload[properties[j]] !== undefined );
        if(table.length > 0){
            test.verify(payload[properties[j]] == table[0][j]);
        }
    }
});

Given("the user have fetch all pages from '|any|'", function(context, url) {
    var client = new XMLHttpRequest();
    var current_page = 1;
    var last_page;
    var data = [];

    do {
        client.open('GET', 'https://reqres.in'+url+'?page='+current_page.toString(), false);
        client.send();

        var response = JSON.parse(client.response);
        last_page = response["total_pages"];

        for (var i = 0; i < response["data"].length; ++i) {
            data.push(response["data"][i]);
        }
        current_page++;
    } while (current_page <= last_page);  // <= so the last page is fetched, too

    context.userData["status"] = client.status;
    context.userData["data"] = data;
    context.userData["response"] = JSON.parse(client.response);
});

function getRandomInt(max) {
    return Math.floor(Math.random() * Math.floor(max));
}

Given("assign one of the id to '|word|'", function(context, name) {
    context.userData[name] = getRandomInt( context.userData["data"].length - 1 );
});

When("user send '|word|' request to '|any|'", function(context, method, url) {
    var client = new XMLHttpRequest();
    client.open(method, 'https://reqres.in'+url, false);
    client.send();
    context.userData["status"] = client.status;
    
    if(client.responseText.length > 0){
        context.userData["response"] = JSON.parse(client.response);
    }
});


Video: Creating Reusable Test Functions Through Shared Script Libraries


Learn how to create test script modules or test frameworks to make your GUI tests maintainable. Squish supports you in doing this through the concepts of Shared Scripts and Global Scripts. While Shared Scripts group test code within a Test Case or Test Suite, Global Scripts can be used to share code across a number of Test Suites. As with all Squish API functions, importing these scripts is available in all Squish scripting languages (Python, JavaScript, Perl, Tcl and Ruby).
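As a minimal sketch of the idea (the file name and helper function below are made up for illustration), a test case can load a function from a Global Scripts directory like this:

# "shared/aut_helpers.py" and addAddressBookEntry() are hypothetical examples
source(findFile("scripts", "shared/aut_helpers.py"))

def main():
    startApplication("AddressbookSwing.jar")
    addAddressBookEntry("Jane", "Doe")  # defined in the shared script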

For more reading, check out a previous Tip of the Week, where we show how to turn recorded scripts into compact function calls, here.

Some useful documentation:

How to Create and Use Shared Data and Shared Scripts
source Function Documentation
findFile Function Documentation
Script Modularization





Integrating Java Code Coverage Tools With Squish Tests


In test-driven development, a common challenge is deciding which tests to write and how many are necessary. Ideally, one would have as many tests as there are possible deviations in a program’s behavior. This is often very hard to achieve, though, so it is necessary to determine how much of an application’s logic is covered by tests. This is typically done with code coverage tools, which exist for a variety of platforms and languages.

Using the Java code coverage tool JaCoCo, we’re going to demonstrate how to use a code coverage tool together with Squish so that the generated coverage reports can tell which part of the application logic is covered by which Squish test case. This makes it possible to identify test cases that duplicate other tests – in terms of coverage of the application logic – and areas which no test case yet covers.

Creating a Short Test

We create a short and simple test case in Squish and then extend it to include JaCoCo. The test uses our addressbook example for Java/Swing; it creates a new entry in the addressbook and then quits the application.

Using the Squish IDE to record the steps yields a scripted test case like this one:

import names

def main():
    startApplication("AddressbookSwing.jar")
    activateItem(waitForObjectItem(names.address_Book_JMenuBar, "File"))
    activateItem(waitForObjectItem(names.file_JMenu, "New..."))
    activateItem(waitForObjectItem(names.address_Book_Unnamed_JMenuBar, "Edit"))
    activateItem(waitForObjectItem(names.edit_JMenu, "Add..."))
    type(waitForObject(names.address_Book_Add_Forename_JTextField), "Andreas")
    mouseClick(waitForObject(names.address_Book_Add_Surname_JTextField), 68, 22, 0, Button.Button1)
    type(waitForObject(names.address_Book_Add_Surname_JTextField), "Pakulat")
    mouseClick(waitForObject(names.address_Book_Add_Email_JTextField), 40, 25, 0, Button.Button1)
    type(waitForObject(names.address_Book_Add_Email_JTextField), "abc@de.com")
    mouseClick(waitForObject(names.address_Book_Add_Phone_JTextField), 29, 14, 0, Button.Button1)
    type(waitForObject(names.address_Book_Add_Phone_JTextField), "123456")
    clickButton(waitForObject(names.address_Book_Add_OK_JButton))
    activateItem(waitForObjectItem(names.address_Book_Unnamed_JMenuBar, "File"))
    activateItem(waitForObjectItem(names.file_JMenu_2, "Quit"))
    clickButton(waitForObject(names.address_Book_No_JButton))

Running the Test With JaCoCo Instrumentation

JaCoCo has to run as part of the application to be able to generate coverage information as explained in its Command Line Interface documentation. An easy way to apply the command line arguments in a Squish test is to register the java executable as the AUT instead of the AUT’s jar file. The startApplication invocation will then be modified to look exactly the same as a manual start of the AUT with JaCoCo from a command window.

Integrating the Squish test case name into the report file name enables the mapping of the coverage data back to a particular test case. The following script snippet demonstrates the startup procedure – including the cleanup of leftover report files from the last execution:

    # Some reused paths, adjust to your system ("import os" is assumed at the top of the script)
    jacocoInstallDir = "/Users/andreas/Downloads/jacoco-0.8.3/"
    addressbookDir = "/Users/andreas/squish/packages/squish-6.4.3-java-mac/examples/java/addressbook"
    javaApp = "%s/AddressBookSwing.jar" % addressbookDir
    testcaseName = os.path.basename(squishinfo.testCase)
    jacocoReport = "%s/%s_jacoco.exec" % (addressbookDir, testcaseName)
    
    # Clean up existing reports
    if os.path.exists(jacocoReport):
        os.remove(jacocoReport)
    
    # Include jacoco when starting the AUT and tell it where to store the report
    startApplication("java -javaagent:%s/lib/jacocoagent.jar=destfile=%s -jar %s" % (jacocoInstallDir, jacocoReport, javaApp))

Executing this test case will now write a JaCoCo report file when the AUT terminates. To ensure that the report file is created before starting the next part – the generation of an HTML report – a short synchronization block is necessary. In this short example it is sufficient to wait for the report file to appear on disk. In larger applications, where writing a report may take a while, the synchronization may also need to check that the file size has stopped growing, or simply wait a fixed amount of time.

    # ensure the report file is there before continuing
    while not os.path.exists(jacocoReport):
        snooze(1)

Visualizing the Code Coverage Data

The execution report file that JaCoCo generates is not human-readable, but it can be used with analyzers/visualizers in CI systems, like the Jenkins code coverage view. It is also possible to generate a human-readable HTML report as part of the Squish test with JaCoCo’s command line interface. The following snippet shows how this can be achieved by invoking the command line interface through Python’s standard subprocess module, which allows running arbitrary commands. The test case name is again made part of the HTML report’s name so reports from different test cases can be differentiated and analyzed separately. This enables the identification of missing tests as well as duplicated ones using the HTML report.

    # Generate a HTML report in a coveragereport subdirectory, using the application jar
    # for the class files argument to avoid extracting those ("import subprocess" is assumed)
    subprocess.check_call(["/usr/bin/java",
                           "-jar", "%s/lib/jacococli.jar" % jacocoInstallDir,
                           "report", jacocoReport,
                           "--classfiles", javaApp,
                           "--sourcefiles", os.path.dirname(javaApp),
                           "--name", "AddressBook Test %s" % testcaseName,
                           "--html", htmlReportDir])

This snippet uses a new variable, htmlReportDir, which is added at the beginning of the main function along with a corresponding cleanup step that removes the report directory if it exists (this uses Python’s shutil module):

    htmlReportDir = "%s/%s_coveragereport" % (addressbookDir, testcaseName)

    # Clean up existing reports
    if os.path.exists(jacocoReport):
        os.remove(jacocoReport)
    if os.path.exists(htmlReportDir):
        shutil.rmtree(htmlReportDir)

The resulting HTML report for our example shows that the test already covers quite a bit of the example application as seen in this screenshot:

Conclusion

Using JaCoCo’s command line interface tools, it is possible to generate code coverage execution reports for each test case – including an HTML report for consumption by humans. Having the relation between code coverage data and test cases allows a better understanding of which tests have to be written and which are not that useful.

While not demonstrated here, it is perfectly possible to apply the same idea to a BDD test case. In such a test, the application startup and setup of the variables would likely be done as part of an OnScenarioStart BDD hook script and the variables for the report files and JaCoCo directory can be passed to the report generation by adding them to the context object available to such hooks. The HTML report generation and synchronization would be moved to a corresponding OnScenarioEnd BDD hook script and would use the variables available from the context for accessing the JaCoCo report. The filename of the JaCoCo report could include the Scenario title to further break down the code coverage information to the scenario level of a BDD test.
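While the article does not show it, a rough sketch of such hooks could look like this (assuming the path variables from the main() example above are defined at module level):

# bdd_hooks.py -- a sketch only, not the article's implementation
import os

@OnScenarioStart
def startAutWithCoverage(context):
    scenarioName = context.title.replace(" ", "_")
    jacocoReport = "%s/%s_jacoco.exec" % (addressbookDir, scenarioName)
    if os.path.exists(jacocoReport):
        os.remove(jacocoReport)
    # pass the report path on to OnScenarioEnd via the context object
    context.userData["jacocoReport"] = jacocoReport
    startApplication("java -javaagent:%s/lib/jacocoagent.jar=destfile=%s -jar %s"
                     % (jacocoInstallDir, jacocoReport, javaApp))

@OnScenarioEnd
def waitForCoverageReport(context):
    # wait for the report file to appear, then generate the HTML report as shown above
    while not os.path.exists(context.userData["jacocoReport"]):
        snooze(1)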

You can download the complete Squish testsuite demonstrating the use of JaCoCo.


Meet froglogic, the Vendor of the Squish GUI Tester and Squish Coco, at This Year’s STAR EAST


froglogic, the makers of the automated GUI Testing Tool Squish and Code Coverage Analysis Tool Coco, will exhibit at this year’s STAR EAST conference in Orlando, Florida, this May 1st – 2nd. 

Wondering if Squish or Coco is the right match for your needs? Check out some of the Q&As below, and meet us in person for in-depth discussions on these questions and more during the conference:

Why Squish?

The Squish GUI tester is unique in that it offers unparalleled support for all major GUI technologies, operating on all desktop, mobile, web and embedded platforms. 

I need to test my application across multiple platforms. 

No problem. Squish is a 100% cross-platform tool, requiring no changes to your tests when porting them to different platforms. 

What about non-standard UI controls? 

Unlike other tools on the market whose focus is on one object recognition method, Squish offers both property-based recognition and image-based recognition methods to identify all types of UI controls, including anything from standard menu dropdowns and buttons, to non-standard third-party controls and 2D/3D graphics or plots. Both methods may be used standalone or combined in a single test case. 

How can I get non-technical project stakeholders involved in the testing process?

The Behavior Driven Development (BDD) approach centers around stories written in common language that describe the expected behavior of an application, allowing technical and non-technical project stakeholders to participate in the authoring of feature descriptions, and therefore tests. The Squish IDE provides unmatched tooling support to create, record, maintain and debug behavior-driven GUI tests.

What are the advantages to bringing Squish Coco into my development processes?

Squish Coco, a multi-language, cross-platform/-compiler code coverage analysis toolchain, uses automatic source code instrumentation to measure test coverage without requiring any changes to an application. Use Coco to find out how much of your tests are hitting your code, and which parts of your source code need more attention. Often, Coco is the tool of choice for many customers looking to achieve safety certifications in their safety-critical applications. 

What if I need help?

froglogic is routinely ranked exceptionally high on standards of customer technical support. With over 75% of the team working in development and technical support roles, your questions will be answered in a manner that is both timely and thorough. froglogic also offers personalized one-on-one and group training, in addition to consulting services. In short, all your needs will be met by our support staff. 

Our experts will be available to demo powerful features of both products, helping new users get started with our tools and aiding existing users with furthering their testing efforts.

Attendees can find froglogic at booth #20.

To get in touch with us or to schedule a meeting with a froglogic representative, please contact sales@froglogic.com.

We’ll see you there!


Video: Image-based Testing with Squish GUI Tester


Image-based testing can be a great addition to the object-aware testing automation approach of Squish GUI Tester. It helps to interact with objects which would not be recognized otherwise, e.g. 3D-painted GUI controls or custom complex GUI controls. It may also be used to automate controls which are not part of the application under test itself. Learn more about Image-based recognition in today’s video:

Here is an example tutorial on Image-based testing, found in our documentation.



Leveraging Python Packages For Better UI Testing


Python is a very popular language, and for good reason. A wealth of production-quality packages for performing all kinds of tasks is freely available on the Internet. But did you know that all this power is readily available in Squish tests, too? This article explains how to extend the Python interpreter shipped with Squish with different popular Python packages.

The Powerful Python Ecosystem

It comes as no surprise that many Squish users choose Python for developing automated GUI tests. Powerful language features, a clean and straightforward syntax, plus a wealth of documentation make it a perfect fit for professional GUI test automation scripts using froglogic’s Squish. However, one big advantage of Python over e.g. JavaScript is often neglected during GUI test development.

A wide range of Python programming tasks doesn’t need to be solved from scratch. Instead, a vast ecosystem of Python packages for solving all kinds of common (and uncommon) tasks is readily available – free of charge. Reusing this battle-tested code can prove to be a real accelerator for developing robust and reliable GUI test scripts.

To keep things manageable, these thousands of packages are maintained in a single place: the Python Package Index (PyPI). This provides a single point of contact to the Python package ecosystem, and an efficient search facility will typically find an existing Python package to solve most common tasks.

Installing Packages For Squish Tests To Use

PyPI is typically accessed using a command line tool called pip. This utility performs common maintenance tasks such as listing, installing, removing or updating installed Python packages. However, the Python installation shipped with Squish does not include the pip utility out of the box – this is to keep the package lean.

The first step to making packages stored in PyPI accessible to Squish test scripts is to install pip. This is a very simple three-step process:

  1. Download the get-pip.py installer script (if you are using Python 3, you will need to get the get-pip.py script for Python 3 instead)
  2. Open a command window and navigate to the Squish installation directory (e.g. C:\Users\Frerich\Squish for Web 6.4.3)
  3. Run the command python\python.exe C:\path\to\get-pip.py

The get-pip.py script will be executed, downloading the pip program and installing it. This will cause plenty of output to be printed to the console, like this:

C:\Users\Frerich\Squish for Web 6.4.3>python\python.exe C:\Users\Frerich\Desktop\get-pip.py
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
Collecting pip
Using cached https://files.pythonhosted.org/packages/d8/f3/413bab4ff08e1fc4828dfc59996d721917df8e8583ea85385d51125dceff/pip-19.0.3-py2.py3-none-any.whl
Collecting setuptools
Using cached https://files.pythonhosted.org/packages/c8/b0/cc6b7ba28d5fb790cf0d5946df849233e32b8872b6baca10c9e002ff5b41/setuptools-41.0.0-py2.py3-none-any.whl
Collecting wheel
Using cached https://files.pythonhosted.org/packages/96/ba/a4702cbb6a3a485239fbe9525443446203f00771af9ac000fa3ef2788201/wheel-0.33.1-py2.py3-none-any.whl
Installing collected packages: pip, setuptools, wheel
The script wheel.exe is installed in 'C:\Users\Frerich\Squish for Web 6.4.3\python\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed pip-19.0.3 setuptools-41.0.0 wheel-0.33.1

The installation should finish within a couple of moments. At the end, the pip utility can be found in the Scripts subdirectory.

Verify that the installation worked by asking pip to print its version number:

C:\Users\Frerich\Squish for Web 6.4.3>python\Scripts\pip -V
pip 19.0.3 from c:\users\frerich\squish for web 6.4.3\python\lib\site-packages\pip (python 2.7)

Congratulations! Thousands of Python packages are now available to your GUI test scripts!

Popular Python Packages

At this point, installing Python packages is just a mere pip install invocation away. In a command shell window, navigate to your Squish installation directory and run

python\Scripts\pip install PACKAGE 

where PACKAGE is the name of whatever awesome package whetted your appetite when browsing PyPI.

To get you started, here are a few great Python packages we used when creating Squish tests ourselves:

  • RPyC (install via ‘pip install rpyc’) is a lightweight but extremely powerful module for automating remote computers. After launching a tiny Python script on the remote machine, your Squish tests are able to execute arbitrary actions on the remote system in a very convenient way. This is great for e.g. preparing the remote system before a test, fetching system information during a test or cleaning up afterwards.
  • untangle (install via ‘pip install untangle’) is a tiny module which can be used to parse arbitrary XML files into Python objects – in a single line of code. This is great for reading all kinds of configuration files, and it is much easier to use than the XML modules in the standard library. Of course, it’s not as powerful – but sometimes you’d rather have just as much complexity as necessary instead of as much power as possible.
  • requests (install via ‘pip install requests’) is the de-facto standard for everything related to HTTP. Need to access a REST API in your Squish test? Upload some data to a web server? Fetch a status page? Look no further – the requests module has you covered. As the home page puts it: requests is “HTTP for Humans™”.
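To give a flavor of the payoff, here is a minimal sketch of a Squish test case using requests (the endpoint is the public demo API from the API testing article above; any HTTP URL works):

import requests

def main():
    # Fetch a REST resource directly from the GUI test script
    response = requests.get("https://reqres.in/api/users/2")
    test.compare(response.status_code, 200, "user endpoint returns 200")
    test.verify("data" in response.json(), "payload contains user data")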

Final Words

At this point, you’re ready to augment your GUI test scripts by reusing any of the thousands of Python packages. Head over to the Python Package Index and see whether there’s a ready-made package you can use instead of reinventing the wheel and solving the problem manually!

What are your preferred Python packages when developing GUI tests using Squish? Let us know in the comments – we’d love to hear from you!


Video: Converting Text-based Object Maps to Script-based Object Maps

Testing Multiple Applications In One Test Case Using Separate Squish Packages


Automating more than one application in a single test case is doable, but what if the UI technologies differ? We would like to show you how to handle two applications in a single test case using two different Squish packages and squishservers, as well as the advantages and disadvantages of this setup. Let’s get started.

Example Setup

We’ll use a Windows application and a website in this example, since this is a common setup. The Windows example application is the addressbook example that ships with every Squish for Windows package. Having Squish for Windows and Squish for Web (plus browser extension) installed on the machine is a precondition.

Since we need a squishserver for each of the packages, we start them via the command line:

C:\Users\franke\squish\squish-6.4.1-windows\bin>squishserver --port=4444
C:\Users\franke\squish\squish-6.4.1-web-windows\bin>squishserver --port=5555

The squishservers handle the communication with the Applications Under Test – in this case, with the web browser (website) and with the addressbook application.

Now we can begin to develop our test.

Information that is important for the external squishservers as well as for the Windows and web support needs to be stored in variables.

    winAutName = "Addressbook"
    squishPackageWindows = "C:\\Users\\franke\\squish\\squish-6.4.1-windows"
    squishPackageWeb = "C:\\Users\\franke\\squish\\squish-6.4.1-web-windows" 
    website = "https://www.froglogic.com"
    squishServerWeb = 5555
    squishserverWin = 4444
    host = "localhost"

To be able to use the toolkit support of a different Squish package, we need to change the environment variable SQUISH_PREFIX. That way, we can execute the test case from any package without a problem. Setting the correct wrapper and starting the __squish__webhook as the AUT is needed to start a browser and inject our hook mechanism.

    os.environ["SQUISH_PREFIX"] = squishPackageWeb
    testSettings.setWrappersForApplication("__squish__webhook", ["Web"])
    ctx_web = startApplication("__squish__webhook","localhost", squishServerWeb)
    startBrowser(website)
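The Windows application can be started analogously via its own squishserver. The following is a sketch, assuming the addressbook example has been registered as an AUT named "Addressbook" on that server (this defines the ctx_win context used below):

    os.environ["SQUISH_PREFIX"] = squishPackageWindows
    testSettings.setWrappersForApplication(winAutName, ["Windows"])
    ctx_win = startApplication(winAutName, host, squishserverWin)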

For switching between the Windows-based application and the browser, two things are needed:

  • Application Context, to send any kind of commands (e.g. button press, typing, etc.) to the right application
  • Toplevel API, to bring the application into the foreground and set the input focus

    setApplicationContext(ctx_win)

    winToplevel = ToplevelWindow.byObject(waitForObject(names.address_Book_Unnamed_Window))
    winToplevel.minimize()

Sending commands to an application which doesn’t have the input focus can cause problems. Therefore, Squish provides a so-called “Toplevel API” which helps to bring an application into the foreground, set the focus or minimize it. This does not work for web browsers.

Disadvantages

  • You are not able to record properly on all of the applications used, because of missing support in the Squish package that is used to execute the test script.
  • Since multiple Squish installations are present, they need more storage capacity than a single package would.

Advantages

  • You can create a working setup easily on your own.
  • Upgrading/Exchanging one of the used packages can be done without much effort.

Conclusion

Using separate Squish packages has its advantages and disadvantages; you have to decide on your own which way you want to go.

Instead of using separate packages, you can contact us to get a combination package. That way you have only one Squish installation as well as a single squishserver which can handle the different toolkits.


Video: GUI Testing of Embedded Devices


Watch how to easily automate GUI tests for applications running on embedded devices. The Squish GUI Tester’s architecture supports remote access to embedded hardware to record and play back user interactions, verifications and validations. Squish features you might know from desktop testing are fully supported. In addition, Squish GUI test scripts are cross-platform compatible. This means that test creation can be done on the embedded hardware or with desktop builds of the embedded application. Create your tests within your local development environment and execute them on the embedded hardware.


Using External Tools in the Squish IDE


The Squish IDE supports opening the files shown in the “Test Case Resources” and “Test Suite Resources” with external tools. This works by associating file content types or a file extension (for example “.txt”) with one or more applications, tools, shell scripts, etc.

These associations can be edited at Edit > Preferences > General > Editors > File Associations:

Once an “editor” is added for a file type/extension, the context menu of such files contains a respective entry:

Simple Use Cases

At first glance it does not seem very exciting to be able to open files with specific extensions in external tools like this – although it may be appealing to use Microsoft Excel or LibreOffice Calc for editing .csv, .tsv and .xls data files.

We could, however, use this functionality to open a tool that extracts a call stack/backtrace from “core” files directly from the Squish IDE. (These are files which may be generated when the Application Under Test crashes.) For example, with a shell/Python script such as this:

#!/usr/bin/python
# -*- coding: utf-8 -*-


import os
import subprocess
import sys


fn_binary = subprocess.check_output(['file', sys.argv[1]]).split('execfn: \'')[1].split('\'')[0]
fn_backtrace = "%s.txt" % sys.argv[1]

args = ['gdb -ex "bt full" -ex q "%s" "%s" >"%s" 2>&1' % (fn_binary, sys.argv[1], fn_backtrace)]
print "%s" % args
subprocess.Popen(args, shell=True).communicate()

if sys.stdout.isatty():
    os.system('cat "%s"' % fn_backtrace)
else:
    if os.system('gedit "%s"' % fn_backtrace) != 0:
        os.system('geany "%s"' % fn_backtrace)

More Complex Use Cases

However, more advanced use cases are possible, for example workflows similar to what we are already using for verification points in Squish. For those, the workflow usually is:

  • Create a verification point.
  • Execute the verification point.
  • In case of a failed verification point, view the differences and decide whether to adjust the expected results and/or fix the test execution.

For the last point, we may be using one of Squish’s built-in tools for viewing the differences. For example, for a table verification point the Squish IDE shows the differences in an internal difference viewer, and for an image verification point Squish uses a viewer specialized in showing image differences.

In the case of our own, custom verifications, which read the “expected” data from a file (which – granted – may be an advanced thing) and compare that data to the contents of a GUI control, one would usually log the differences to the test report/results via test.pass(), test.fail(), etc. If the “actual” data found in the GUI control is also written to a file, we can use an external tool to compare the “expected” data file to the “actual”/“failed” data file.
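A minimal sketch of such a custom verification in Python (how the “actual” text is extracted from the GUI control is left open; note that test.passes() is the Python spelling of test.pass()):

def verifyAgainstFile(expectedFile, actualText):
    # Compare the GUI control's contents against the "expected" data file;
    # on a mismatch, write an "actual" file next to it for external diffing.
    expectedText = open(expectedFile).read()
    if actualText == expectedText:
        test.passes("Contents match '%s'" % expectedFile)
    else:
        actualFile = expectedFile + ".failed"
        open(actualFile, "w").write(actualText)
        test.fail("Contents differ, see '%s'" % actualFile)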

And at this point the above mentioned functionality makes it very convenient to open the “expected” and “failed” data file in a tool of our choice (for example text file diff/merge tools like Meld, WinMerge, and others).

Example in the Knowledge Base

An article that shows an example implementation of this can be found in our knowledge base at Custom, File based Verification Points. This is what the user sees after choosing Open With… > View differences as described in that knowledge base article:

At this point one can update the “expected” data (on the left) via copy & paste, or by using the functionality provided by this particular diff tool (arrow buttons pointing left).


StarEAST 2019


This year’s StarEAST took place from April 28th to May 3rd at the Rosen Hotel Convention Center in sunny Orlando, Florida. Our team was located at booth 20.

Throughout the fair, one of the main topics was Behavior Driven Development (BDD). Discussions ranged from general approaches for making it easier for testers to write tests derived from requirements, to more technical talks about converting BDD tests to scripted tests, API tests and unit tests. You can read more about BDD in the Squish GUI Tester here.

Some of our existing customers used the fair to engage with us in a more personal way. Receiving direct feedback about our products and discussing testing topics and current and future technologies is highly appreciated on our end.

Check back for more details on upcoming events, including the Squish Days in Munich this October.

See you at the next event!


Using Linux uinput From a Test Script


With UI testing, one may need the Squish API’s so-called native functions. The Squish native functions, as well as the mouse* and keyboard* functions, use the methods provided by the windowing system.

Toolkit-specific APIs, like mouseClick, may also use native functions, but might instead post events to the toolkit’s event queue. The point is that with the mouse* functions one can move the mouse and control click timings. Hovering the mouse may be necessary before clicking, because object identification may change upon mouse hover.

However, when targeting an embedded Linux device that does not use X11 but e.g. the Linux framebuffer or Wayland, there are no such functions. (For various Wayland compositors, a solution will be coming to Squish soon.)

One way to get around this limitation is to create your own input device based on uinput (see the code listing [1] below). To interact with such an input device, the process implementing the fake device needs to read commands from somewhere. The easiest solution is probably to let such a program read from its standard input.
I’m going to use a named pipe: one can write a line of text to it and have the device program read the line from it. One caveat to overcome is that a single write using echo "x y z" > my-pipe will close the device process’s standard input. To keep the input open, at least one writer must keep the pipe open. A suggestion is to keep an extra write file descriptor to the pipe open in the shell that runs the device program, like this:

mkfifo /tmp/input
exec 3>/tmp/input &
sudo ./uinput < /tmp/input

This means the shell that runs the mouse device program must be kept open. (One could use screen if a permanent ssh connection is a problem.)
Note that accessing /dev/uinput likely requires root privileges.

From the test script one can then use the Squish RemoteSystem API, e.g.

var sys = new RemoteSystem();
sys.execute(["/bin/sh", "-c", "echo 'm 50 10' > /tmp/input"])
sys.execute(["/bin/sh", "-c", "echo 'c' > /tmp/input"])

to move the mouse 50 pixels to the right and 10 down, and then click.

[1] Code listing

/* gcc -o uinput uinput.c */
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include <linux/uinput.h>

static void emit(int fd, int type, int code, int val) {
    struct input_event ie;

    ie.type = type;
    ie.code = code;
    ie.value = val;
    ie.time.tv_sec = 0;
    ie.time.tv_usec = 0;

    write(fd, &ie, sizeof(ie));
}

void deleteDevice(int fd) {
    if (fd > 0) {
        ioctl(fd, UI_DEV_DESTROY);
        close(fd);
    }
}

int setupMouse() {
    struct uinput_setup usetup;
    int i = 50;

    int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
    if (fd < 0) {
        fprintf(stderr, "failed to open device %s\n", strerror(errno));
        return -1; /* the function returns int, so a value is required here */
    }
    /* enable mouse button left and relative events */
    ioctl(fd, UI_SET_EVBIT, EV_KEY);
    ioctl(fd, UI_SET_KEYBIT, BTN_LEFT);

    ioctl(fd, UI_SET_EVBIT, EV_REL);
    ioctl(fd, UI_SET_RELBIT, REL_X);
    ioctl(fd, UI_SET_RELBIT, REL_Y);

    memset(&usetup, 0, sizeof(usetup));
    usetup.id.bustype = BUS_USB;
    usetup.id.vendor = 0x1234; /* sample vendor */
    usetup.id.product = 0x5678; /* sample product */
    strcpy(usetup.name, "Example device");

    ioctl(fd, UI_DEV_SETUP, &usetup);
    ioctl(fd, UI_DEV_CREATE);
    sleep(1);
    return fd;
}   
    
void pointerClick(int fd) {
    emit(fd, EV_KEY, BTN_MOUSE, 1);
    emit(fd, EV_SYN, SYN_REPORT, 0);
    emit(fd, EV_KEY, BTN_MOUSE, 0);
    emit(fd, EV_SYN, SYN_REPORT, 0);
}       

void pointerMove(int fd, int x, int y) {
    emit(fd, EV_REL, REL_X, x);
    emit(fd, EV_REL, REL_Y, y);
    emit(fd, EV_SYN, SYN_REPORT, 0);
}

void sighandler(int i) {
    (void)i;
    close(0);
}

int main(void) {
    int done = 0;
    int fd = setupMouse();

    signal(SIGINT, sighandler);
    signal(SIGTERM, sighandler);
    signal(SIGPIPE, SIG_IGN);

    while (!done) {
        char ev;
        int x, y;
        int count = scanf("%c", &ev);
        if (count <= 0 || ev <= 0) {
            printf("count %d ev %d\n", count, ev);
            break;
        }
        switch (ev) {
        case 'c':
            pointerClick(fd);
            break;
        case 'm':
            count = scanf(" %d %d", &x, &y);
            if (count != 2) {
                done = 1;
            } else {
                pointerMove(fd, x, y);
            }
            break;
        }
    }
    deleteDevice(fd);

    return 0;
}



Video: Using pip to Install External Python Modules

TEST Magazine: Squish for Automation of HMI Tests on Embedded Devices


Embedded HMIs have seen tremendous growth in their application within the automotive technology sector, a market valued in 2015 at roughly $16 billion (£12.3 billion). In-vehicle infotainment (IVI) systems and human-machine interfaces (HMIs) are technologies in almost all modern consumer cars, providing capabilities for navigation, multi-media playback and safety-critical systems.

With the rise of modern user interfaces in the embedded industries, new methods to automate GUI tests of embedded devices are required to develop and ship quality products.

Historically, in-vehicle infotainment systems originated as in-vehicle entertainment systems providing audio to the vehicle via radio, cassette players or CD players. Once controlled by knobs on the console or dashboard of the vehicle, today’s IVI systems are operated through complex graphical user interfaces (GUIs) supported by software that is sophisticated and complex, yet aimed at being user-friendly and safe to use.

Automotive consumers of today expect their IVIs to be intuitive and user-friendly, despite the underlying complexities of the HMI. They must be highly responsive to touch-based and gesture-based user-software interactions, and offer a complete and coherent visualisation of the underlying vehicle applications which the IVI controls and displays.

Lastly, with the advancement of connected cars comes the rise of companion apps – mobile applications which augment driving and can control functions of these cars – which need to be tested on a user interface level.

These constraints underscore the need for automated, robust and reliable GUI tests that can effectively and broadly test these embedded devices.

Cue the Squish GUI Tester, a tool for the creation, execution and maintenance of cross-platform GUI tests running on all desktop, mobile, web and embedded platforms.

Squish, with its support of all major GUI technologies, powerful IDE offering, and seamless integration into the latest Continuous Integration platforms, is the tool of choice for thousands of companies worldwide, many of which are in the automotive technology sector.

Squish stands out among the competition for its breadth of GUI technology support, offering editions for both desktop toolkits and embedded toolkits. The Squish for Qt edition allows for GUI test automation on embedded targets with its dedicated support for automated testing of all Qt widgets, QML and Qt Quick controls as well as embedded Qt Webkit and Qt WebEngine content.

So, how can Squish help?

To enable the automation of HMI tests on embedded devices, a dedicated Squish for Qt Embedded SDK & Support package is available. With this package, the minimal necessary components of Squish can be deployed on any embedded system, such as embedded Linux, QNX, WinCE or Android devices.

After this, Squish tools can remotely connect to the embedded components for automation. Due to Squish’s real cross-platform support, tests can be automated against desktop builds of Qt HMIs or in emulated/simulated environments.

These tests can then be run against the embedded target without any changes. For cases where end-stage hardware is not available and testing must be done in a purely emulated environment, Squish integrates well with Hardware-in-the-Loop (HIL) systems, which can simulate the physical conditions of the environment in which the embedded device operates, for example driving.

Squish also provides bindings to the Qt In-Vehicle Infotainment module, which provides C++ classes and QML types for accessing vehicle features, as well as a core API for implementing new IVI features. The bindings allow tests to interact with all vehicle features provided by the module.

Functional Mock-up Interface (FMI) is a tool-independent industry standard for model-based development of systems, in which a real product is assembled digitally through model exchange and co-simulation.

Testing using simulated devices offers key advantages: it removes the need for a physical device; a simulation can mock rare conditions (e.g., device malfunction); and a simulation can provide the test environment with its current state.

Squish can test applications that communicate with a simulated device without any knowledge of the simulation. Squish provides support to import and interact with Functional Mock-up Units (FMUs), allowing Squish to interact with simulators, models and other backend components as part of the system test automation.

In such a scenario, the Squish inputs are processed by the application under test, and appropriate commands are sent to the device. In reaction to that, the device changes its state and notifies the application under test. 

Squish can detect changes in the application’s GUI, and confirm a valid response. One such simulator with which Squish integrates is Vector CANoe, a tool which supports the design and development of networks and networked ECUs for simulation, analysis and testing of network communication. 

A practical example would be simulation of braking or acceleration in a passenger vehicle, during which Squish tests that the IVI updates accordingly.

Squish makes it possible to test multiple applications from multiple devices using a single test script. This makes it possible to test the interaction between different applications or between multiple instances of the same application.

What about end-to-end testing?

A practical use case for this feature is in testing companion apps that control an IVI, which are applications that pair with passenger vehicles to augment the driving experience. Squish allows a combination of mobile app testing with other Squish editions to automate end-to-end testing of complex scenarios which involve several frontends and applications running on different platforms and devices.

For many companies, however, it is required to reach some sort of safety certification for their embedded devices present in safety-critical systems. This is achieved, generally, by some kind of quality measurement for the testing. A key indicator to measure the quality of testing is to understand how much of the application’s source code is covered by automated GUI tests.

This is the idea behind froglogic’s product Squish Coco, a complete, cross-platform, cross-compiler toolchain for code coverage analysis of C/C++/C# and QML-based applications. With Coco’s automatic source code instrumentation capabilities, the tool can collect and report coverage data against various coverage metrics.

These include function and statement coverage, decision and condition coverage, and even Modified Condition/Decision Coverage. Supplementing analytic coverage data, Coco also maps source code to a color (e.g., red, pink, yellow, green), with the colors identifying untested code, tested code, dead code, or redundantly tested code.

The Qt Company, known globally for Qt, its cross-platform SDK technology used in millions of apps and devices, uses Coco for reaching safety certification in safety-critical systems in which Qt is present. The Qt Safe Renderer, launched in 2018, enables developers to design and add safety-critical UI elements to Qt-based safety-critical systems.

In these systems, the concept of functional safety – minimising risk to humans by detecting dangerous conditions and adjusting for or avoiding them – is critical. In automotive cases for which the Qt Safe Renderer is used, for example in the addition of warning indicators in automotive digital cockpits, reaching safety certification would mean achieving ISO-26262 safety standards, an international standard for functional safety of electrical and/or electronic systems in passenger automobiles.

How does Coco come into play?

Achieving ISO-26262 requires quantifying MC/DC coverage, a metric which Coco can determine and report. To complement coverage data supplied by Coco, Squish comes paired with a GUI Coverage Browser, a tool designed to display data on which parts of the GUI (i.e., not the source code) have been hit by tests.

Presented in an easy-to-use interface, this visualisation tool will colour the GUI elements according to if they have been exercised by existing tests.

So, why froglogic?

froglogic’s Squish GUI Tester is the tool of choice for many leading automotive technology companies because it offers unparalleled support for testing HMIs in any context. From device testing in purely emulated environments, to end-to-end testing of companion apps, Squish provides state-of-the-art solutions for GUI testing in all of today’s connected cars.

Introducing Squish Coco to your development and test processes alleviates redundant testing and augments efficient source code coverage, allowing developers to achieve critical safety certifications for products that are designed from the beginning to keep people safe while driving.

Choose froglogic if you are looking for a product portfolio unmatched in its breadth and depth of transforming your solutions into high-quality, safety-certified and user experience-focused top-notch products.


Visual Verification Points – Using the VisualVP Editor


Today we will demonstrate some tips and tricks for using the Squish ‘Visual Verification Point Editor’

…and how it can be used by developers and testers even if they’re not using the Squish IDE, or even if they’re not using the Squish built-in interpreters at all (if using the squishtest Python import module, for example).

Visual VPs Can Be Created Easily from Test Scripts

…with no need for the Squish IDE or even the Squish script interpreter (for example, when Squish is used as a Python import module in regular Python scripts).

To create a Visual Verification Point from a test script – one which verifies all GUI elements currently visible on the AUT’s main window – simply add the following lines:

...
createVisualVP( "VP1", names.MainWindow )
# test.vp( "VP1" )
...

…and let the test script run. The test now dumps the object hierarchy of the main window, including all visible controls, together with screenshots and properties of these controls, as one single Visual Verification Point file in the test’s data folder.

Once the test has run and has recorded the main window in its state at the moment ‘createVisualVP()’ was called, the lines can be changed to make the test verify at this position in the script:

...
# createVisualVP( "VP1", names.MainWindow )
test.vp( "VP1" )
...

Now, when the test script runs these lines, it verifies the whole main window’s control elements against the dataset just recorded as ‘VP1’.
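A common convenience is to keep both calls in the script and switch between recording and verifying with a single flag. Here is a minimal sketch, assuming the createVisualVP and test.vp functions used above (the flag and helper names are invented for this example):

# Set to True once to (re-)record the expected data, then back to False.
RECORD_VISUAL_VPS = False

def visualCheck(vpName, objectName):
    if RECORD_VISUAL_VPS:
        createVisualVP(vpName, objectName)  # dump hierarchy, screenshots, properties
    else:
        test.vp(vpName)                     # verify against the recorded dataset

visualCheck("VP1", names.MainWindow)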

Using the Editor from the Command Line

This perhaps sounds a little impractical: if just one property of one control element changes, or one control element changes position from one software version to another, the complete verification will fail. In these cases, the VisualVP Editor can be used for further investigating the failure. On the command line, type:

visualVPeditor --expected <VPfile> --actual <pathToResultData> 

The VisualVP Editor will show up, and the GUI hierarchy tree of the verification point dataset can be browsed for the concrete errors. If used during test development, these verifications can be adjusted (or even disabled) to make the verification pass under such distinct circumstances. The adjusted verifications can then either be saved back to the existing ‘expected’ VP set or stored as a new VP set under a different name. This is discussed in the following section.

In this way, different datasets can be derived from the VP we took in the first place (which contains verifications for the complete window) and stored as modified versions, each with distinct subgroups of element verifications enabled or disabled.

The VP Editor can thus be used for ‘tuning’ our VP data. Enabling or disabling complete, distinct branches of the window’s GUI tree is possible (if there are known fluctuations in properties, for example). It is also possible to enable or disable complete categories of verifications, which affects all objects within the VP:

  • (Screenshot Check) enabling/disabling all screenshot verifications.
  • (Geometry Check) enabling/disabling all positional verifications.
  • (Hierarchy Check) enabling/disabling all hierarchical verifications.
  • (Content Check) enabling/disabling all verifications of content properties.

Fine-tuning at a much more detailed level is also possible: for example, disabling the text property verification of one distinct text label in the lower-left corner of the second group box from the right of the window (because its text may be inconsistent and change at each start of the application, like a date/time view). Such things can then be ‘blanked out’ of the verification.


The post Visual Verification Points – Using the VisualVP Editor appeared first on froglogic.

Case Insensitive Matching of Real Name String Properties in froglogic’s Squish


Motivation

In Squish, a symbolic name contains multiple constraints for an object search. These constraints apply to properties of an object and compare against object references, strings or numeric values. When comparing strings, Squish can perform inexact comparisons using wildcards or regular expressions.

Case insensitive matching is most frequently needed in combination with Windows file systems, which store filenames using upper- and lowercase letters but do not distinguish between them when matching filenames on disk. It is therefore possible to open a file on Windows using filenames which differ only in case. If GUI labels are based on user input of Windows file names, this can result in slightly different GUI labels in the AUT, which may cause problems with object identification in Squish test cases.
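As a quick illustration of the underlying file system behaviour (plain Python, run on a case-insensitive file system such as the Windows default; the filename is invented):

# Create a file with a mixed-case name, then open it using all lowercase.
with open("MyAddresses.ADR", "w") as f:
    f.write("sample data")

# Succeeds on Windows (NTFS is case-insensitive by default);
# raises FileNotFoundError on a case-sensitive file system such as ext4.
with open("myaddresses.adr") as f:
    print(f.read())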

This article shows how to use regular expressions in Squish real names to perform case insensitive matching of strings when identifying AUT objects in test cases.

Regular Expression Constraints

A property constraint can be changed from an exact string to a regular expression either programmatically or in the Object Map Editor GUI, by double-clicking on the Operator and selecting RegEx from the drop-down menu that appears.

Object Map Editor, with a regular expression property

After saving and letting the refactoring operation complete, one can find the entry for that symbolic name in the names.py file and see that its property value now refers to a RegularExpression object.

address_Book_MyAddresses_adr_MainWindow = {"type": "MainWindow", "windowTitle": RegularExpression("Address Book - MyAddresses.adr")}

In JavaScript, the entry would be in names.js and would look like this:

export var addressBookMyAddressesAdrMainWindow = {"type": "MainWindow", "windowTitle": new RegularExpression("Address Book - MyAddresses.adr")};

Unfortunately, when regular expressions are represented in the object map like this, flags such as the “case insensitive” flag cannot be represented in the entry. This means that case insensitive matching of string properties is a little more difficult than simply matching a regular expression. However, given a string used for identification, it is straightforward to convert it into a regular expression which matches case insensitively, and to use that in a real name constraint of an object map entry. This article will show you how.

Character Class Meta-Characters

To accomplish case insensitive matching of properties in Squish real names, we will use regular expression meta-characters which specify a class of characters. One of the simplest is the [square bracket] meta-character: a square-bracketed expression matches a single character, and the set of valid characters is listed between the brackets.

For example, a case insensitive regular expression for a string like “heLLO wOrLD”, matching any possible combination of upper- and lowercase characters which spell those two words (separated by a single space), would look like this:

[hH][eE][lL][lL][oO] [wW][oO][rR][lL][dD]
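As a quick sanity check (plain Python re module, independent of Squish), every case variant of the two words matches the pattern:

import re

pattern = r"[hH][eE][lL][lL][oO] [wW][oO][rR][lL][dD]"
for candidate in ("hello world", "HELLO WORLD", "heLLO wOrLD"):
    assert re.fullmatch(pattern, candidate)           # all spellings match
assert re.fullmatch(pattern, "hello  world") is None  # two spaces: no match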

A Python function that, given an alphanumeric string, returns a RegularExpression like the one above would look like this:

from objectmaphelper import *
import re

def caseInsensitive(alphaString):
    returnValue = ""
    alphaString = re.escape(alphaString.lower())
    for ch in alphaString:
        if ch.islower():
            returnValue += "[{}{}]".format(ch, ch.upper())
        else:
            returnValue += ch
    return RegularExpression(returnValue)

A similar function in JavaScript would look like this:

import { RegularExpression, Wildcard } from 'objectmaphelper.js';

var isAlpha = /[a-z]/;

function escapeRegExp(string) {
    // Escape regular expression special characters (including the backslash)
    return string.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

function caseInsensitive(alphaString) {
    var returnValue = "";
    alphaString = escapeRegExp(alphaString.toLowerCase());
    for (var i = 0; i < alphaString.length; ++i) {
        var ch = alphaString[i];
        if (ch.match(isAlpha)) {
            returnValue += "[" + ch + ch.toUpperCase() + "]";
        } else {
            returnValue += ch;
        }
    }
    return new RegularExpression(returnValue);
}

Notice that each of these functions also escapes any regular expression special characters, to prevent them from being interpreted in an unexpected way when passed to Squish’s RegularExpression constructor.

Case Insensitive Real Name Properties

Given a symbolic name with a property such as windowTitle, and a string we want to use for case insensitive identification, we can use this function to set the value in the associative mapping that is a Squish real name. With the Squish 6.4 Scripted Object Map, the Python and JavaScript code would look like this:

names.addressBookMainWindow["windowTitle"] = caseInsensitive("Address bOOk - mYaDdreSSes.ADR")

While the Perl would look more like this:

$Names::addressBookMainWindow{"windowTitle"} = caseInsensitive("Address bOOk - mYaDdreSSes.ADR");
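To close the loop, here is a minimal Python usage sketch. It assumes the caseInsensitive helper defined above is importable (the module name below is hypothetical; in practice the helper could live in a shared test resource) and uses Squish’s standard waitForObject:

import names
from caseinsensitivehelper import caseInsensitive  # hypothetical shared module

# Relax the windowTitle constraint so any capitalization matches
names.addressBookMainWindow["windowTitle"] = caseInsensitive("Address Book - MyAddresses.adr")

# Object lookup now succeeds regardless of the title's case
mainWindow = waitForObject(names.addressBookMainWindow)
test.log("Matched window title: " + str(mainWindow.windowTitle))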

Conclusion

When testing software which ignores the upper/lower case of strings, it is sometimes necessary to identify objects in a case insensitive way. This article has shown one way to do this in froglogic’s Squish.

For another tip about improving object names, see this article.

The post Case Insensitive Matching of Real Name String Properties in froglogic’s Squish appeared first on froglogic.

Creating Safer Skies: How skyguide Uses the Squish GUI Tester to Improve Safety and Efficiency of the Swiss Airspace

Air Traffic Control Setup

Skyguide, headquartered in Geneva, Switzerland, is a company with a longstanding history of contribution to the development of Swiss aviation. Today, skyguide provides air navigation services for Switzerland and its adjacent countries to ensure the safety of civil and military air navigation through management and monitoring of the Swiss airspace. Providing safe, reliable and efficient air navigation, skyguide holds a record of guiding over 1.2 million flights a year through Europe’s most complex airspace.

“What’s really key for us, having to do end-to-end integration testing and not normally having access to all the source code, is a tool like Squish that can talk to an application on Linux and one on Windows…it provides exactly what we need.”

We sat down with Mr. Duncan Fletcher and Mr. Geoffroy Carlotti, two Test Automation Engineers at skyguide, to learn about their company’s longstanding history of using Squish to test a diverse set of applications. Engineers at skyguide follow a Behavior-Driven Development (BDD) paradigm for their automation efforts. That is, a methodology that centers around stories written in a common language that describe the expected behavior of an application, allowing technical and non-technical users to participate in the authoring of feature descriptions, and therefore tests. The engineers we spoke to are responsible for foundationally defining this BDD framework for other teams within the company, thereby allowing technical and business people to participate in test automation.

“The fact that we’ve reduced the needed framework down to one tool is one reason we chose Squish.”

Engineers at skyguide are no strangers to advanced automation techniques using Squish. In one application they test, described Fletcher and Carlotti, they use a multi-pronged approach combining localized OCR with localized image search and Windows object recognition. This application, written in C++ and running on Windows, is essentially the flight radar system displayed in front of air traffic controllers. An important detail of this radar system is the algorithm by which a flight shows onscreen. As Mr. Carlotti noted, there are hundreds of rules to take into account to display a flight properly on a radar screen. Even the color of the flight data follows certain rules to avoid drawing attention away from the air traffic controller looking at the screen. One benefit brought by Squish was that the process to test this application became streamlined via automation. “It’s impossible to test all these cases manually, so this is huge,” reported Mr. Carlotti. The engineers noted that, in general, the applications tested within the company are highly diverse, which in turn made Squish stand out to them for its ability to test such a varying set of applications within one framework.

“Another huge benefit is that [with BDD] there is living documentation.”

Both technical and non-technical project stakeholders benefit from the BDD approach set up by the engineers at skyguide. At a fundamental level, Mr. Fletcher and Mr. Carlotti are developing the BDD framework to be available both to testers and to those who write requirements. In this way, each person on the team can view the test results, understand them and react accordingly. A forward-looking goal for these engineers is to involve more end users and business people in interacting with the BDD scenarios – that is, to approach GUI testing in a way that is holistic in its setup and comprehensive in its involvement of all sides of the business. Mr. Fletcher noted that, while the team still does a good portion of manual tests, skyguide focuses increasingly on automation. In closing, Mr. Fletcher and Mr. Carlotti noted the excellent level of customer support from our technical team, and that the two greatly looked forward to the next major release of Squish, version 6.5.

The post Creating Safer Skies: How skyguide Uses the Squish GUI Tester to Improve Safety and Efficiency of the Swiss Airspace appeared first on froglogic.
