
String Handling in Python 3 Test Scripts


With Python 2 not seeing any further development or bug fixes after January 1st, 2020, requests for custom Squish packages containing Python 3 support have gained a lot more traction. Since the next major Squish release will ship with Python 3, we’ll take a short look at Python 3-specific behavior in the context of Squish GUI tests. The most obvious change in behavior is how Python handles strings and how individual characters are represented inside a string.

Strings in Python

In Python 2, handling of strings was rather simple. Script writers usually did not have to care where these strings came from (network, files from other machines, etc.) or what their internal representation (also known as “encoding”) looked like. Most of the time, everything would just work, except for cases where it wouldn’t.

In Python 3, the authors of the language acknowledged that strings can come from different kinds of sources. Also, the Python 2 behavior of “working most of the time, but not always” may cause problems that are hard to track down.

The result is that in Python 3, strings now care about this internal representation, the details of which are beyond the scope of this article. Thankfully, the Python 3 documentation explains everything in its Unicode HOWTO.
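
As a minimal illustration (plain Python 3, independent of the Squish API), the distinction between textual str objects and their encoded bytes representation looks like this:

text = 'Grüße'                       # str: a sequence of Unicode characters
data = text.encode('utf-8')          # bytes: the UTF-8 encoded representation
print(type(text), type(data))        # <class 'str'> <class 'bytes'>
print(data.decode('utf-8') == text)  # True: decoding restores the original string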

Strings in Squish API

Traditionally, Squish always expected test scripts to be UTF-8 encoded. In fact, the Squish IDE by default creates all text files — and even the recently added Script-based Object Map — in this encoding. For literal strings in single- or double-quotes, this likely means that they are already in a format that is safely read and understood by Python 3. The following will just work fine:

someObject.title = '😀'  # Set a string property to a grinning face emoji

For data from any other source, like network connections or files read inside the test script, one extra step will be needed in Python 3. Most Python functions will provide such data as bytes instead of str. This means they will first have to be converted to str before passing them on to anything in the Squish API that expects a string.

This can be achieved by calling bytes.decode(encoding):

with open('input.txt', 'rb') as f:  # open in binary mode so read() returns bytes
    message = f.read()
type(someObject, message)  # does not work in Python 3 anymore (message is bytes, not str)
type(someObject, message.decode('utf-8'))  # works in Python 2 and 3

Strings in GUI Toolkits

In addition to string types in Python, Squish also deals with string types from the GUI toolkit of the AUT. Typical types can be QString for Qt-based AUTs or java.lang.String in the case of Java-based AUTs. Conversion between these string types is usually done automatically inside test scripts. In some cases, however, an explicit conversion from the toolkit string type into the script language string type is needed or is at least helpful:

# Look up a QLineEdit inside a Qt AUT
lineEdit = waitForObject(names.surname_LineEdit)
# QLineEdit.text is a QString
test.compare(className(lineEdit.text), "QString")
# Comparing toolkit strings with Python strings works fine
test.compare(lineEdit.text, "Hello")
# Explicit conversion to Python str is also possible
test.compare(str(lineEdit.text), "Hello")

Summary

String handling in Python test scripts does not change much between Python 2 and 3. In most cases, the transition between Python versions should be transparent as long as all test script files are properly encoded as UTF-8. For the remaining cases, Python 3 offers functions to convert between bytes and str when necessary.



Performance Analysis with Squish Coco


Version 5 of Squish Coco supports the performance measurement of an application with its new, built-in Function Profiler. Like any profiling tool, it provides information on the time consumed by each function.

Here, we present how to work with Squish Coco’s profiler using an expression parser example. This article covers compiling the application, narrowing down the performance issues, identifying problematic functions, rewriting the source code and comparing the refactored code’s performance to the original version. Let’s get started.

Application Building

On the code generation side, enabling the profiler only requires compiling with one additional flag on top of the coverage flags: --cs-function-profiler.

Two options are available:

  • --cs-function-profiler=all: profile all functions;
  • --cs-function-profiler=skip-trivial: skip single statement functions (often getter or setter functions.)

After building, collecting the profiling data is the same as collecting the coverage data. No additional work is necessary to analyze the results using the CoverageBrowser.

For our parser sample, we simply add the switch --cs-function-profiler=all to the COVERAGESCANNER_ARGS variable in the instrumentation script, then rebuild and execute the tests using:

$ make -f gmake.mak clean parser tests

Analyzing the Performance Issues

Identifying the Test Cases

Using the Executions window of the CoverageBrowser, we see that some tests result in high execution times.


The tests consist of adding new variables. We can see that the time spent in this functionality grows quadratically with the number of insertions.

To analyze the situation, we use the execution comparison mode in the CoverageBrowser (File –> Execution Comparison Analysis.) We select the test with 1 000 variable insertions as a reference test, and we compare it to the test with 10 000 insertions.


Identifying the Problematic Functions

Moving to the Function Profiler window permits a more detailed analysis:

Comparison using the Function Profiler window.


This window displays the execution time and call count for the current test with 10 000 insertions and the reference test with 1 000 insertions (columns with the keyword (Reference) in the header.)

Additionally, two columns are listed for computing the ratio of the current test to the reference, given by:

ratio = duration of test_current / duration of test_reference

Using this information, it is easy to see that for a test set growing by a factor of 10, the execution time of toupper(), Variablelist::add() and Variablelist::get_id() increases by a factor of over 100. Furthermore, reducing the number of calls to toupper() seems necessary, because its execution count increases by a factor of 99 whereas the others increase by less than a factor of 10.

Now, if we look at the code behind these functions, we see a classic example of source code originally written in C: usage of char* and manual handling of a list of elements in a table. STL containers are used, but only as a replacement for the classic C arrays. get_id() finds the position of a variable by iterating over the complete table, so its complexity is linear in the number of stored items, whereas it could be logarithmic with a binary search. In addition, toupper() is called on each iteration, which further slows down the algorithm.

/*
 * Add a name and value to the variable list
 */
bool Variablelist::add(const char* name, double value)
{
    VAR new_var;
    strncpy(new_var.name, name, 30);
    new_var.value = value;

    int id = get_id(name);
    if (id == -1)
    {
        // variable does not yet exist
        var.push_back(new_var);
    }
    else
    {
        // variable already exists. overwrite it
        var[id] = new_var;
    }
    return true;
}

/*
 * Returns the id of the given name in the variable list. Returns -1 if name
 * is not present in the list. Name is case insensitive
 */
int Variablelist::get_id(const char* name)
{
    // first make the name uppercase
    char nameU[NAME_LEN_MAX+1];
    char varU[NAME_LEN_MAX+1];
    toupper(nameU, name);

    for (unsigned int i = 0; i < var.size(); i++)
    {
        toupper(varU, var[i].name);
        if (strcmp(nameU, varU) == 0)
        {
            return i;
        }
    }
    return -1;
}


/*
 * str is copied to upper and made uppercase
 * upper is the returned string
 * str should be null-terminated
 */
void toupper(char upper[], const char str[])
{
    int i = -1;
    do
    {
        i++;
        upper[i] = std::toupper(str[i]);
    }
    while (str[i] != '\0');
}

Rewriting the Source Code

The solution here is simple: replace this array with a std::map. Here is the updated code:

std::map<std::string, double> var;

bool Variablelist::add(const char* name, double value)
{
    var[ toUpper( name ) ] = value ;
    return true;
}

bool Variablelist::set_value(const char* name, const double value)
{
    return add(name, value);
}

std::string toUpper(const char str[])
{
    std::string upper;
    upper.reserve(NAME_LEN_MAX);
    int i = -1;
    do
    {
        i++;
        upper += std::toupper(str[i]);
    }
    while (str[i] != '\0');
    return upper;
}

Here toupper() is rewritten to return a std::string, and the function add() becomes a simple assignment into the std::map, which has logarithmic complexity.

Comparing Results

We can re-execute the tests with the newest version and compare the results with the CoverageBrowser. Here we load the latest version of parser.csmes and compare it with the previous one (Tool –> Compare with…)

By inspecting the Executions window, we see that the speed issue is resolved. The execution time of the tests no longer grows quadratically.


The Function Profiler window further confirms this finding. Squish Coco highlights the modified functions in bold, underlines the new functions and strikes out the removed functions. Measuring the difference in execution time (1 minute and 32 seconds) shows that the time saved amounts to nearly the entire previous execution time.


Conclusion

Using Squish Coco’s new profiling extension, it is possible to conduct a post-mortem analysis of the application’s performance. This kind of offline analysis differs from that of many other profiling tools, which typically focus on collecting profiling information only on the developer’s computer.

Here it is possible to analyze the complete data set after a full suite execution. Using the CoverageBrowser, it is then possible to select relevant tests, compare them, and, after code rewrites or refactoring, compare two software versions. In other words, Coco allows us to execute a suite with the profiling data recorded, which we can archive or analyze later.


Coco 5 Released, With Built-In Function Profiler


froglogic is excited to deliver a major release of its multi-language code coverage analysis toolchain, Squish Coco 5. This release offers a built-in Function Profiler which facilitates conducting a performance analysis based on timing data for function calls associated with a group of test executions. The addition of profiling capabilities to Squish Coco’s already advanced analysis features makes it a 3-in-1 tool for code and test coverage assessment. It offers cross-platform, cross-compiler code coverage analysis; it can analyze your source code based on standard and advanced code metrics; and, with the new Function Profiler, it can measure and assess application performance. The latest version also introduces a number of bug fixes and stability improvements for all parts of Coco.

Function Profiler

The new Function Profiler provides data regarding the frequency and duration of function calls associated with your test executions. To optimize code performance, the Function Profiler helps you zero in on high function call counts and long procedure times. This data can then be used to inform how you develop your source code moving forward. Should a specific algorithm be refactored? Have I introduced an unnecessary and avoidable performance slowdown? These questions can now be asked and answered.

The profiling extension further supports a holistic performance analysis through an execution comparison feature. With it, you can compare the timing between two product versions, enabling you to quantify performance effects of new code changes rapidly and easily.

We’ve written a separate how-to blog on using the Function Profiler. In it, we cover compiling the application, narrowing down performance issues, identifying problematic functions, rewriting the source code and comparing the new code’s performance to the original version. Take a look here.

Changelog Highlights

Coco 5 brings additional features and code enhancements to all users of the product. Some highlights include:

  • System-wide license server configuration available for easier deployment.
  • Improved integration between the Squish GUI Tester and Squish Coco for multi-platform Qt-based application testing.
  • Instrumentation of multi-threaded QML applications now possible.
  • Support for ARM DS-5 compilers.
  • Annotations support for C# programs.
  • Resolved compilation issues related to the ternary operator, the noexcept keyword and lambda functions in C++/C# code.

A complete listing of new features, bug fixes and other improvements is available in the Coco 5 Release Notes.

Download & Evaluation

Customers and existing evaluators can find the Coco 5 packages in their download area. New evaluators are welcome to request a free, fully-supported and fully-functional trial.

Release Webinars

We’re hosting a free, live webinar + Q&A on new features in Coco 5. Join us in your preferred time zone.

Upcoming: Qt Virtual Tech Conference

May 12 – 13th, 2020 | Online

A froglogic engineer will give a talk on Squish Coco at the upcoming Qt Virtual Tech Conference. The live, online event hosted by The Qt Company will feature speakers from diverse industries on all topics Qt, from UI design and development to automated Qt-based application testing, and more.

Our talk will focus on using code coverage analysis to enhance product quality, with special considerations for safety-critical software applications. Registration will go live soon — we’ll check back in with more details.


Basic Usage of Labels in Squish Test Center


Squish Test Center is a lightweight web database which aggregates and analyzes test results generated from Squish GUI tests. This article will introduce three Test Center concepts — batches, reports and labels — to help you better investigate failures in your test outcomes.

Batch and Reports: What Are They?

Squish Test Center views a collection of one or more test results as a report. A batch is a group of one or more reports. Reports can be tagged with and grouped by labels. For example, if the test executions within a report share common factors like product version, operating system and compiler, then a useful set of labels might be:

version=2.5.2     OS=windows10     compiler=msvc17

The Test Center documentation explains these concepts in more depth. Armed with this conceptual knowledge, we can now dive into using labels to investigate failures in our test results.

Labels to the Rescue

To understand how helpful labels are, let’s first have a look at what happens when we don’t set any. The screenshot of Test Center’s Explore page, pictured below, shows a batch where we uploaded test results of two different runs of the same test suite.

One report indicates that the test suite execution encountered failures. Clicking on the failure icon brings us to the Verifications page, which tells us the specific test case that failed and even the line of the test script from which the failure originated.

The failure occurs in line 8 of the test script:

import subprocess

def main():
    command = "TASKLIST"
    try:
        subprocess.check_call([command], shell=True)
    except:
        test.fail(command + " returned non-zero value")

Finding out exactly what triggered the failure will require more investigation, as it is unclear based on the current information what the issue is.

Now let’s see if tagging the reports with labels provides more information. We’ll again have a look at the batch’s Explore page with the same test results uploaded, but this time with labels.

A quick glance reveals that the error is occurring in a test execution tagged with the value “Linux” for the “OS” label. Since “TASKLIST” is a Windows command, the explanation for the failure is clear, and no additional investigation is needed.

Summary

Labels provide information on the context in which tests are run and can help find the root cause of an issue.


New Product Release: Squish Test Center


froglogic is excited to announce the newest addition to its software quality product portfolio: Squish Test Center.

Built to enhance your development workflow, Squish Test Center is a central, lightweight test result management database which connects test automation with the entire development process.

Squish Test Center: An Overview

Squish Test Center aggregates your automated tests in one place. Its navigable results dashboard acts as a central repository for organizing, monitoring and analyzing your test outcomes across a project’s lifecycle. Offering automatic and intuitive statistical reporting of your test results, Test Center allows you to ascertain the health of your development projects historically and as your application evolves in real-time.

Test Center enables teams to:

  • Organize an unlimited number of test reports based on operating system, compiler or other user-defined filters.
  • Get on top of the latest test failures faster.
  • Quickly identify slow running or flaky tests.
  • Compare two result uploads side-by-side for failure analysis.
  • Achieve traceability with external test and requirements management tools.

Squish Test Center’s Explore page.

A Feature-Rich Database to Achieve Your Development Goals

Squish Test Center offers several key features to support your existing Quality Assurance processes:

  • Automatic Reporting
  • Duration Tracking
  • Traceability
  • Continuous Automation
  • xUnit Import Support
  • Scheduler
  • Correlation Analyzer
  • Anywhere Access


Visit our Features page to learn more.

Coupling with the Squish GUI Tester

Built from the beginning to support automated GUI tests created with the Squish GUI Tester, Test Center couples naturally with Squish.

Uploading your test results to Test Center is usually the first step in getting up and running. You can upload your automated GUI test results interactively from within the Squish IDE, or via the command line using a squishrunner call. Uploads can include attachments like log files and captured screenshots that pair with your results.

The coupling also provides users with the ability to download results from Squish Test Center into their Squish IDE. This enables users to debug failures on their local system using the powers of the IDE. You can use ‘View Differences’ for the various Verification Point types, correct failures using the ‘Use as Expected’ feature, or simply look at the script code by jumping to the failure locations using the backtrace included in the reports.

Seamless Integrations to Your Favorite Tools

Squish Test Center offers multiple integrations to external test and requirements management tools, Continuous Integration servers and issue tracking and reporting platforms. These integrations enable you to achieve traceability between Squish Test Center and the external system, and allow for automated result synchronization between them. Current integrations include:

  • Jenkins
  • Xray
  • Zephyr
  • TestRail
  • QAComplete
  • Jira
  • Confluence

Featured Blogs

Check out our Test Center How-To blogs written by the developers who created the product.

Pricing

Squish Test Center’s competitive pricing was designed to accommodate the needs of our customers, whether they be in a small team or part of an enterprise-level operation. Pricing is incremental based on the number of users, with discounts added for larger user groups.

No matter your license package, you’ll have full access to all benefits of Test Center, including unlimited test execution import, advanced statistical reporting and full functionality of any of our 3rd party integrations.

A Test Center subscription includes unlimited support and access to all patch-level, minor and major releases of the product.

Get in touch with us to get a quote for a Test Center subscription that’s adapted to the size of your team and your organization’s needs.

Evaluation

We welcome new evaluators to request a free, fully-supported and fully-functional trial.

Release Webinars

We’re hosting a free, live webinar + Q&A on getting started with Squish Test Center. Join us in your preferred time zone.


Testing Web Content Accessibility Guidelines


Web Content Accessibility Guidelines (WCAG) offer a wide range of recommendations for making web content more accessible. Following these guidelines makes content accessible to a broader group of people with disabilities and, in general, improves usability for all users. WCAG success criteria are written as testable statements that are not technology-specific, although we will explore testing them with an example written with Squish for Web.

aria-label Attribute

There are many accessibility features one could test. We will focus on testing the WAI-ARIA aria-label property. This attribute is commonly used to make page regions of the same type distinguishable, for example if there are multiple navigation landmarks on the same page.

We’ll use a simple HTML website to make the point clear without adding too much mental overhead:

<html>
  <body>
      <div id="topnav" role="navigaton" aria-label="Top navigation">Top navigation</div>
      <div id="bottomnav" role="navigaton" aria-label="Bottom navigation">Bottom navigation</div>
  </body>
</html>

In a web browser, the above HTML renders as:

aria-label Accessibility Testing HTML simple example

Using Squish for Web, we make sure our website conforms to WCAG by:

  • Starting the browser.
  • Waiting for the top nav div to become available and verifying the presence and value of the aria-label.
  • Waiting for the bottom nav div to become available and verifying the presence and value of the aria-label.

The property function in use for verification is HTML_Object.property. More details on this function are located in our documentation.

And our test script reads:

# -*- coding: utf-8 -*-

def main():
    startBrowser("https://download.froglogic.com/support/aria-label.html")

    topNav = waitForObject( {"id": "topnav", "tagName": "DIV", "visible": True} )

    test.verify( topNav.property( "aria-label" ) == "Top navigation", "Top navigation is WCAG conforming" )

    bottomNav = waitForObject( {"id": "bottomnav", "tagName": "DIV", "visible": True} )

    test.verify( bottomNav.property( "aria-label" ) == "Bottom navigation", "Bottom navigation is WCAG conforming" )

The test script above could be extended to employ a reusable utility function for use across multiple pages, as sketched below. And to note, a typical navigation landmark would contain hyperlinks, while we’ve used a simpler, plain text example.
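
A minimal sketch of such a utility, reusing the real-name syntax and property check from the script above (the helper name verifyAriaLabel and the default tag name are illustrative assumptions, not part of the Squish API):

def verifyAriaLabel(elementId, expectedLabel, tagName="DIV"):
    # Wait for the element and verify the presence and value of its aria-label
    element = waitForObject( {"id": elementId, "tagName": tagName, "visible": True} )
    test.verify( element.property( "aria-label" ) == expectedLabel,
                 "%s has the expected aria-label '%s'" % (elementId, expectedLabel) )

def main():
    startBrowser("https://download.froglogic.com/support/aria-label.html")
    verifyAriaLabel("topnav", "Top navigation")
    verifyAriaLabel("bottomnav", "Bottom navigation")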

Summary

Squish’s introspection capabilities allow you to increase automation coverage in the context of accessibility testing. Accessibility testing, like in the example shown here, helps assistive technology users more easily navigate a web page, and it improves site usability for all end-users.


Custom Test Result Reporting Using Log Levels


Categorizing log output into different levels allows you to decide whether you want more or less detailed messages in your test results. Squish does not offer a ready-made function for different log levels, but you can easily create this functionality.

We’ll use the test.log(message, detail) Squish API which allows custom log messages. To add the desired functionality, we need to override the test.log function and use the detail parameter to determine which log level is set for each test.log call. Using Python, this would look something like the following:

def overrideTestLog():
    # Store a reference to the original function
    test.originalLog = test.log
    test.log = myTestLog

We wrote an earlier blog about overriding Squish API functions. Take a look here for more information.

In order to execute this code, we need to define the myTestLog function which does the main job:

def myTestLog(message, detail=None):
    if detail is None:
        # Just to find log statements which haven't set the log
        # level correctly yet
        test.warning("log statement --" + message + "-- has no detail")
    elif detail <= logLevel:
        test.originalLog(message)

The above script code does the following:

  • If no detail is set, a warning message will be added to the test results.
  • If the message's detail level is less than or equal to the configured logLevel, the message will be printed in the test results.

Note: By changing "<=" to "==", you can choose to issue a log message only for that particular level.

An example test case shows the complete code:

def main():
    global logLevel
    logLevel = 2
    test.log("Loglevel is set to: " + str(logLevel))
    
    overrideTestLog()
    test.log("Loglevel 3 Test", 3)
    test.log("Log message without detail")
    test.log("Loglevel 2 Test", 2)
    test.log("Loglevel 1 Test", 1)
    test.log("Loglevel without detail")

def myTestLog(message, detail=None):
    if detail is None:
        # Just to find log statements which haven't set the log 
        # level correctly yet
        test.warning("log statement --" + message + "-- has no detail")
    elif detail <= logLevel:
        test.originalLog(message)

        
def overrideTestLog():
    # Store a reference to the original function
    test.originalLog = test.log
    test.log = myTestLog

The value of the global variable logLevel should be gathered from your test data.
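
As a rough sketch of how that could look with Squish's testData API (the file name logconfig.tsv and the column name loglevel are placeholders for your own test data):

def readLogLevel():
    # Sketch only: read the first record of a hypothetical 'logconfig.tsv'
    # test data file and return its 'loglevel' column as an integer
    record = testData.dataset("logconfig.tsv")[0]
    return int(testData.field(record, "loglevel"))

main() would then assign logLevel = readLogLevel() instead of hard-coding the value.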

You can run this script without any additional settings. Simply create a new test case, and copy the code into the freshly created test case. Doing so, your reporting will look like this:

Example Squish test result and log output when setting different log levels.


Squish Success at The Qt Company: GUI Testing Qt Creator


“Squish is the best possible solution for us…we didn’t see any other tool we could’ve chosen instead.”

Robert Löhning, Senior Software Engineer, The Qt Company

The Qt Company offers innovative tools for the rapid design and development of complex User Interfaces. One such tool is the Qt Creator IDE, an advanced, integrated development environment for creating modern apps on desktop, mobile and embedded platforms. 

We sat down with Qt Creator’s Quality Assurance team to talk about how they’re transforming tedious manual tests into cross-platform, automated regression tests using the Squish GUI Tester

Read about The Qt Company’s multi-pronged approach to bringing high-quality tools to market in our latest success story.



Integrating Coco Code Coverage with Unit Test Frameworks


Increasing demands on the quality of software applications have bolstered the need for adequate, thorough testing, including, but not limited to, testing at the unit level. Integrating code coverage analysis tools with your unit test framework gives a clear sense of the quality of your tests: the degree to which the code has been “hit” by your tests, how often or how little a part of the code has been executed, and what the coverage looks like as your testing efforts evolve.

Squish Coco offers a flexible, open approach to the integration with unit test frameworks. In principle, every unit test framework is supportable.

Custom Unit Test Frameworks

We will demonstrate Coco’s ability to integrate with a generic unit test framework. In this example, we use the C++ programming language. As a first step, let’s understand how the integration works.

As a code coverage tool, Squish Coco is inherently not aware (without additional help) of which parts of the code are tests and which parts are not, because unit tests are viewed as parts of the code from Squish Coco’s perspective. Therefore, some initial work is required.

In order for Coco to distinguish a test from a) other tests and b) other parts of the code, we must provide Coco with the name of the test and clearly state: “this is the beginning of the test”, “this is the end of the test”, and “please save the coverage results of the test.”

We do this by calling some special functions:

 __coveragescanner_testname("sometest");

…To provide Squish Coco with the name of the test.

 __coveragescanner_teststate("PASSED");

Or,

 __coveragescanner_teststate("FAILED");

Or even,

 __coveragescanner_teststate("UNKNOWN");

…To provide Squish Coco with the information on whether the test was successful or, if for any reason, the test result remains unknown.

Next, we effectively say that the test has begun, with:

 __coveragescanner_clear();

And that the test has ended with:

 __coveragescanner_save();

These last two functions are automatically generated and injected into your code if Squish Coco is enabled.

Putting the pieces together, we have the following for a particular test:

SOMETEST_FUNCTION()
{
#ifdef __COVERAGESCANNER__
      __coveragescanner_clear();
      __coveragescanner_testname("sometest");
#endif
      // Here comes the testing part, checking if some integer 'i' is even
      if ( i % 2 == 0 ) {
#ifdef __COVERAGESCANNER__
            __coveragescanner_teststate("PASSED");
#endif
      } else {
#ifdef __COVERAGESCANNER__
            __coveragescanner_teststate("FAILED");
#endif
      }
      // and we are saving the results...
#ifdef __COVERAGESCANNER__
      __coveragescanner_save();
#endif
}

#ifdef __COVERAGESCANNER__ … #endif is a necessary part which keeps the test functioning without Squish Coco (or with Squish Coco disabled.)

A General Approach

Let’s assume we have some framework with an API similar to Google Test. Let’s further assume that in order to use it, we must include the framework.h header file.

We have our main.cpp file as in the following:

#include "test.h"
#include "framework.h"

int main(int argc, char** argv)
{
#ifdef __COVERAGESCANNER__
       // initialize CoverageScanner library
       __coveragescanner_install(argv[0]);
 #endif
       InitFrameWork(&argc, argv); // init our test framework
       return RUN_ALL_TESTS(); // run all tests listed in the project
} 

Our test.h file:

#ifdef __COVERAGESCANNER__
#define COCOTESTBEGIN(NAME) __coveragescanner_clear(); \
__coveragescanner_testname(NAME); \
__coveragescanner_teststate("PASSED")
#define COCOTESTFAILS() __coveragescanner_teststate("FAILED")
#define COCOTESTEND() __coveragescanner_save()
#else
#define COCOTESTBEGIN(NAME)
#define COCOTESTFAILS()
#define COCOTESTEND()
#endif

And test.cpp, as in:

#include "test.h"
#include "framework.h"

 TEST(TestCaseName, TestName1)
{
       COCOTESTBEGIN("testName1");
       EXPECT_TRUE(true);
       // HasFailure() returns true if the test fails in full or in part   
       if ( HasFailure() ) COCOTESTFAILS();
       COCOTESTEND();
}

 TEST(TestCaseName, TestName2)
{
       COCOTESTBEGIN("testName2");
       EXPECT_EQ(1, 1);
       if ( HasFailure() ) COCOTESTFAILS();
       COCOTESTEND();
}

 TEST(TestCaseName, TestName3)
{
       COCOTESTBEGIN("testName3");
       EXPECT_TRUE(true);
       EXPECT_TRUE(false);
       if ( HasFailure() ) COCOTESTFAILS();
       COCOTESTEND();
}

Thus, we have three macros defined: COCOTESTBEGIN, COCOTESTFAILS and COCOTESTEND. The rest is up to you. If you keep the file test.h exactly as it is and #include it in your compilation units (.cpp files) accordingly, it will be suitable for any other framework as well.

This also works without Squish Coco (or with Squish Coco disabled).

Please do not forget to use your framework’s equivalent of HasFailure(), which must return ‘true’ if the test fails (in full or in part) and ‘false’ otherwise. In this example the third test fails, but only partly. Regardless, it will be recorded in Squish Coco as a failure.

Further Reading

Our documentation outlines configurations for certain popular unit test frameworks, like Google Test and CppUnit.


Identifying Dead Links on Websites


A common problem with the maintenance of large websites is “dead” links that end up pointing to a nonexistent destination because something has changed somewhere. In particular, these include links to external resources which may easily become stale without anyone noticing until a customer attempts to follow the link.

There are various ways to check whether a given URL works or not, that is, whether the corresponding server returns a successful result or reports a failure. One challenge when wanting to do this for a complete website, especially one with dynamic content, is to obtain the list of URLs to check. Reading the HTML response that the server sends is often not sufficient with modern websites, as a lot of content is built dynamically, for example by JavaScript code.

In the following sections, we present a solution to these two issues using Squish for Web to access the rendered website inside the browser and extract the links, as well as performing the actual link check.

Gathering Linked Resources

There are three basic tasks involved in a link checker:

  • Gathering all links on a given webpage.
  • Checking each link to verify if it is reachable.
  • Loading the linked resources and repeating the process.

The first step can be implemented using the XPath support in Squish for Web. Once a given URL has been loaded into the browser by assigning to the browser tab’s url property, the test script can use an XPath expression, like .//A, to gather references to all link objects and extract the resources they point to.

Since Squish loads the page into a browser, all dynamic content will be executed, and hence the website will appear just like a site visitor would see it. Thus, gathering the links as shown in the following snippet also captures such dynamically generated links. The example loads the URL into a browser, and then uses the XPath expression to generate a list of URLs the page points to.

def checkLinks(starturl, verifiedLinks, doRecursive):
    activeBrowserTab().url = starturl
    body = waitForObject(nameOfBody)
    links = getAllLinkUrls(body)

def getAllLinkUrls(startobject):
    links = startobject.evaluateXPath(".//A")
    urls = []
    for i in range(0, links.snapshotLength):
        urls.append(links.snapshotItem(i).property("href"))
    return urls

Verifying Resource Availability

Once all links have been gathered into a list, each one can be checked to verify that it loads a proper website. A loop iterating over the list of links and loading each one into the browser in the same way that the checkLinks function does is quick to write. However, with further experimentation, it becomes clear this is not sufficient:

One important feature for a tool that verifies the connectivity of links is the ability to provide a report on those links that failed to work. In the code snippet below, the verifyLink function does this by looking at the content of the BODY element, which usually contains a textual description of the error. Since a given link may not point to an HTML page but to, e.g., a PDF document, verifyLink catches the LookupError exception raised by waitForObject and considers the link as working in this case, too.

In case the URL is not reachable, the simple HTTP server used for the example test suite will generate an HTML page that contains details about the error. The verifyLink function uses this to detect that a link is not working. With other websites, this check may need to be adapted.

In case of a link not working, the verifyLink function records the link and the error text, and the loop collects all problems and reports them to the caller.

Some additional book-keeping is necessary to ensure that each link is visited only once, and to collect a list of working links on which the check also needs to be run later on. The first part is achieved by keeping a set of links in the verifiedLinks variable. The second part is achieved with a small helper function shouldFollowLinkFrom (covered in the next section) and the linksToFollow list that records those links.

def checkLinks(starturl, verifiedLinks, doRecursive):
...
    links = getAllLinkUrls(body)
    linksToFollow = []
    missingLinks = []
    for link in links:
        if link in verifiedLinks:
            continue
        verifiedLinks.add(link)
        linkResult = verifyLink(link)
        if linkResult is None:
            if shouldFollowLinkFrom(starturl, link):
                linksToFollow.append(link)
        else:
            missingLinks.append(linkResult)

def verifyLink(url):
    activeBrowserTab().url = url
    try:
        body = waitForObject(nameOfBody)
        txt = str(body.simplifiedInnerText)
        if "Error response" in txt and "Error code" in txt and "Error code explanation" in txt:
            return {"url": url, "reason": txt}
        return None
    except LookupError:
        return None

Recursing Into the Found Links

Once all links on a given page have been checked, it is usually necessary to follow those links, load the corresponding page and, further, check the links on those pages. This continues recursively until some initially set condition is reached that stops the recursion.

One example condition would be to stop once no new links are found, and to only follow links to pages provided by the same server. The basic idea here is that a website is usually provided by a single server, and links that point to external resources are not the responsibility of that website’s development team anymore. So it is sufficient that the links to those external resources work; it is not necessary to check whether those pages themselves contain broken links.

The check if a given link leaves the website (i.e., is an external resource) can be done in many ways, and often depends on how the website is created. In the shouldFollowLinkFrom function below, we compare the network location part of the URL of the page currently being checked with that of the link URL. For this example, this is sufficient. For more complex websites, it is easy to extend this function with additional logic, as sketched after the snippet.

def shouldFollowLinkFrom(starturl, link):
    return urlparse(starturl).netloc == urlparse(link).netloc
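
For example, a hedged variant could also skip targets that cannot be followed as pages, such as mailto: links. This is a sketch only, assuming urlparse from Python's urllib.parse as used above:

def shouldFollowLinkFrom(starturl, link):
    # Sketch: besides staying on the same host, ignore schemes such as
    # 'mailto:' or 'javascript:' which cannot be loaded as pages
    parsed = urlparse(link)
    if parsed.scheme not in ("http", "https"):
        return False
    return urlparse(starturl).netloc == parsed.netloc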

The actual recursion is done after all links of the current URL have been checked. Each link that should be followed is passed to the same checkLinks function again in turn. On each iteration, the list of missingLinks is extended so a complete report of all visited links can be generated after the recursion ends.

...
            missingLinks.append(linkResult)
    if doRecursive:
        for link in linksToFollow:
            missingLinks += checkLinks(link, verifiedLinks, doRecursive)
    return missingLinks

To make the failed link checks visible, the test uses the reporting functionalities from Squish, for example test.fail. This leads to a concise report that includes all the information the test has for the links, like in the following screenshot:

Sample report for connectivity check

Further Development

With the example code shown above, a simple, yet effective and extensible method for verifying the connections of a website to other parts of itself or to the internet has been implemented. There are, however, many conceivable extensions of the code, including:

  • Improve synchronization so links are read only once the page is fully settled. This is particularly challenging with modern websites that lazily load most of their content. One way to do this would be to implement a synchronization point that waits for the number of links to be stable over a certain time span, indicating that no new links were added by lazily-loaded content (a small sketch of such a helper follows this list).
  • On more complex websites, the shouldFollowLinkFrom logic likely needs to take into account that pages from different sub-domains are considered part of the same website appearance.
  • The current implementation merely finds links that are not working. It is still up to the website developer to determine where those have been used. A potential improvement would be to store the URL of the page(s) that use a particular link and include that in the report. That would make it easier to repair the website links.
  • Based on the previous idea, it might be interesting to visualize the connection between pages. This can be done by generating a text file using the dot language which can be used to generate a graphical visualization.
  • When executing the example test suite, it becomes evident that for a large site, some way of parallelizing is needed. This could be done by having the script work on a queue of links to check in parallel and distribute this work onto multiple systems which have a squishserver running. The queue would then periodically check each system for being done with loading and obtain the result from the page as well as the links. So, pages loading slowly would not hold up test execution.
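
The following is a rough sketch of such a synchronization point, reusing the evaluateXPath call from getAllLinkUrls above; the helper name, the polling interval and the timeouts are illustrative assumptions:

import time

def waitForStableLinkCount(body, settleSeconds=2, timeoutSeconds=30):
    # Consider the page settled once the number of <A> elements has not
    # changed for 'settleSeconds'; give up after 'timeoutSeconds'
    deadline = time.time() + timeoutSeconds
    lastCount = -1
    lastChange = time.time()
    while time.time() < deadline:
        count = body.evaluateXPath(".//A").snapshotLength
        if count != lastCount:
            lastCount = count
            lastChange = time.time()
        elif time.time() - lastChange >= settleSeconds:
            break
        snooze(0.5)  # Squish's snooze(); time.sleep(0.5) would work as well
    return lastCount

checkLinks could call this helper right after waitForObject(nameOfBody), before gathering the links.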

Try It Yourself

The code snippets and a simple test website are contained in a small Squish test suite. The website can be made available by opening a command/terminal window, changing to the samplepage subdirectory in the test suite and running Python’s SimpleHTTPServer. The Squish installation ships with a Python that contains this module, so if Squish is installed for example in C:\squish and the example suite has been extracted to C:\suites\suite_website_connectivity, the commands to invoke in a terminal would be:

cd C:\suites\suite_website_connectivity\samplepage
C:\squish\python\python.exe -m SimpleHTTPServer


Qt Virtual Tech Con 2020 Squish Coco Q&A


We had a great time presenting our talk, Using Code Coverage to Enhance Product Quality, at this year’s Qt Virtual Tech Con hosted by our friends at The Qt Company. We introduced our code coverage tool, Squish Coco, and discussed ways to improve testing efficiency for both your development and QA teams. We received lots of great questions during the Q&A portion, some of which we could not answer within the time limit. The complete Q&A is given here:

Integrations & Support

Are GitLab CI and GitHub Actions supported for Continuous Integration?

Yes. While we have documented instructions for CI tool integrations with Jenkins and Bamboo, generally speaking, integrations with other CI systems, including GitLab CI and GitHub Actions, are supported with a set of command line tools.

Our support team is available to assist your team with integrating your chosen CI system with Squish Coco.  

Will Coco integrate with my unit test framework “X?”

Coco was built with an open, flexible approach in mind for integrating with unit test frameworks. While we have documented setups for popular frameworks like CppUnit, Qt Test and Google Test, virtually any generic or atypical framework can be supported. 

We’ve written a blog explaining how to integrate your generic-type unit test framework with Squish Coco. Read it here.  

Does Coco work with gmock?

gmock has been absorbed into the Google Test framework. Coco includes integration support with this framework.

How does Coco work with CMake + Google Test?

Our documentation includes instructions for using CMake and Google Test. The two can be used together without issue.

Performance

What are the project size limits which Coco can handle?

While there is no definitive answer on the size limits of projects which Coco can manage, we have seen Coco used successfully with applications built with millions of lines of source code. 

In terms of runtime performance, are instrumented builds still debug or non-optimized?

It is up to you to compile your build in debug or release mode. With Coco, you can directly use an optimized build, which will run faster than a debug build. Coco provides accurate, reliable coverage information for either case. 

Our benchmarks show a 10% to 30% impact on runtime performance with instrumented builds.

Testing

Can we measure test coverage for tests written as standalone Python scripts which call application functions using some API? How should we instrument the code, then?

Yes, given your project is written in one of Coco’s supported languages (e.g., C/C++, C#). Instrumentation is handled using the CoverageScanner, as in other projects. 

Should we aim for a high coverage from unit tests alone? Is it common practice to include more “heavy-weight” integration tests in our coverage reporting?

Aiming for high coverage from unit tests is a good goal, but attention should be paid to the quality of the tests themselves. Unit testing, while fundamental, does not provide a complete picture of product quality, even if high coverage is achieved with unit tests. Unit tests can tell you that the code is working as development intended, but may miss customer requirements discovered during more “heavy-weight” integration testing, like exercising the GUI. A multi-pronged approach to testing, including a review of the quality of tests to prevent “automation blindness”, is recommended. 

Does Coco support blackbox testing?

Yes. In teams where source code security prevents sharing the code between all developers and QA team members, Coco provides the ability to create a ‘blackbox’ instrumentation database. The database can be shared safely with any member of the team (or even outsourced testers), because there is no possibility to view the application’s source. Further, QA engineers are still able to view the coverage of their tests and manage their executions, before passing their reports to development to merge the reports into one global coverage report.

Check out our blog for a how-to on blackbox testing with Coco.

Tool Qualification & Safety-Critical Applications

Does froglogic support Tool Qualification for IEC 61508 and IEC 62304?

Yes. We offer Tool Qualification Kits for the following standards:

  • ISO 26262: Road Vehicles – Functional Safety
  • EN 50128: Railway Applications
  • DO 178C: Airborne Systems
  • IEC 61508: Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems
  • IEC 62304: Medical Device Software – Software Life Cycle Processes
  • ISO 13485: Medical Devices – Quality Management Systems

Miscellaneous

What is the difference between Squish Coco and gcov/LCOV? Are there metrics that Coco can retrieve that gcov can’t?

  • gcov’s coverage level support is limited to statement and branch coverage, whereas Coco supports condition, MC/DC and MCC coverage, in addition to statement and branch coverage. 
  • gcov does not produce reliable coverage results for optimized builds.
  • LCOV, gcov’s graphical frontend, creates HTML pages displaying the source code annotated with the coverage information. Coco, on the other hand, not only can produce detailed HTML reports to aid in analysis, but Coco’s frontend user interface program, the CoverageBrowser, offers a fully-functional GUI for interactive code coverage data analysis, annotated source code views, test execution status and timing, and much more. 
  • gcov works only on code compiled with GCC, whereas Coco has more extensive compiler support.
  • Coco records a code coverage report from each unit test or application test separately. This allows the selection and comparison of individual tests.
  • Coco supports Test Impact Analysis (also referred to as Patch Analysis), an optimization method used to determine which tests exercise a specific code change (e.g., a last-minute patch.) Using this analysis, you can run only those tests which exercise the change, thus improving testing efficiency under a time constraint where risk assessment for the patch needs to be conducted.

Try Out Coco Today


GUI Test Automation – Benefits & Challenges


To keep pace with the ever-increasing demand for higher software quality, full end-to-end testing of software products has become a common practice. By exercising the application via the graphical user interface (GUI), testers assume the position of the user, which yields a lot of benefits. For example, GUI tests exercise a huge portion of the application’s source code with relatively little effort. GUI testing lets testers get a lot of ‘leverage.’

However, as applications become more powerful and complex, so do their GUIs. Manually testing every piece of functionality is tedious at best. Hence, many software projects consider automating at least some of their GUI testing efforts – and rightfully so.

There are plenty of good reasons to automate GUI tests: human testers are freed up to do things only humans can do, tests are executed more quickly, they yield reproducible results, and more. Alas, there is no free lunch. Getting a clear understanding of the desired behavior or identifying test cases which are suitable for automation are only two of the challenges you will face. There are many factors to consider when trying to decide whether, and which, GUI tests should be automated. This article discusses the most important ones and tries to help you with the decision.

Benefits of GUI Test Automation

Liberating Testers

First and foremost, test automation does not replace human testers. Instead, test automation supports human testers. By automating tests, human testers are relieved of executing mundane tasks. Instead, they can concentrate on verifying behavior which a machine cannot verify easily.

Instead of grinding through simple test cases, human testers can work on quality assurance tasks such as:

  • Usability testing to find complicated or hard to use user interfaces.
  • Exploratory testing which leverages tester experience to look for defects in specific areas of the application.
  • Analyzing test reports to identify patterns which hint at the root cause of failures.

Speed Matters

Speed truly matters when it comes to testing. Slow test execution means that you cannot execute the tests as often. In fact, slow test execution often means that you don’t want to do it as often. Hence, GUI testing of non-trivial software projects is often conducted once a week. Or, even worse, shortly before reaching critical milestones such as a public release.

Executing tests more quickly may seem like a ‘nice-to-have’ thing. Test execution speed has significant consequences for everyone on a software team, though.


Defects are found more quickly when executing tests more often. Ideally, only little time has passed since the last successful test run. Hence, changes to the application between test runs are minimized. Often enough, developers still have the changes they applied in the back of their mind. This makes fixing defects much cheaper.

Fixing defects shortly after they were introduced avoids regressions. By avoiding surprise regressions, the progress at which a project nears completion is much more linear:

More accurate estimates through faster test execution

Note how the green line has multiple smaller bumps whereas the blue one has fewer, bigger bumps.

A more linear progression makes accurate time estimates a lot easier – good news for project managers, the marketing department, and customers!

Reproducible Results

People are smart. Sometimes, too smart for their own good. Human brains are hard-wired to find new, shorter, more interesting ways to perform a repetitive task. This ability is excellent for creative work, such as exploratory testing.

Execution of regression and GUI tests requires other qualities, however. It’s important to be able to reproduce the test reports. This enables testing bug fixes: does the report still contain a failure in the new build? It also permits assessing the quality over time. In order to get comparable results, you need to perform the same test steps. The exact same test steps. Every. Single. Time.

This can be mind-numbingly boring. Hence, after ten, 50, 100 executions of the same test case, the cleverness of humans kicks in. They use a keyboard shortcut instead of multiple mouse clicks. Maybe they start to skip a little step. After a while, they might omit entire test cases since

There is no possible way this test case will fail. There was no change to the application in this area. And besides, it never failed before!

Don’t buy it.

Quality assurance requires a great deal of diligence and endurance. Both are qualities which computers excel at. An automated test case guarantees comparable results. The computer will perform the exact same steps, every single time. It will never take any shortcuts, and it will never silently skip tests. Assuming that you have a stable test setup, the only thing changing between test runs should be your application. So if a test case fails in one test report but passes in the previous report, you can tell that it must have been caused by a change in the application.

Precise Expectations And Requirements

Manual test execution builds on the intelligence of human testers. Test specifications and test plans can be left vague. Of course, any diligent QA team will try to express any test case in clear and unambiguous terms. As it happens though, when churning out hundreds or thousands of test cases, not all of them will be equally precise.

In many cases, being a little lenient causes no harm. When a test fails however, it can become a major source of frustration. Nobody likes delaying a release and sitting through a subsequent meeting in which unclear test expectations are discussed!

Automating GUI tests does away with this lack of precision. A computer, unless explicitly told otherwise, permits no ambiguity. Expected outcomes must be expressed clearly. This may seem cumbersome at first. However, it greatly improves the overall consistency of the test suite. Even after writing a thousand test cases, the 1001st test case still needs to be precise in its description about what is done and what is expected.

Note that precision is not the same thing as accuracy though. Precision is about ambiguity (or lack thereof) in statements. Saying “It’s a 2 kilometer walk to the next gas station” is less precise than “It’s a 2408 meter walk to the next gas station”. Accuracy is about correctness. It’s about whether you’re walking in the right direction! Hence, even though test automation enforces precision, you still need to make sure that the expected outcome is accurate!

Challenges with GUI Test Automation

Any software project seeking to automate GUI tests will eventually face common challenges. Some may be specific to the individual organization. Others are commonly faced by any software project.

Clear Understanding of Desired Behavior

Computers need to be told exactly what the expected behavior and state is. This is a highly positive aspect of test automation. However, existing test cases may not be as precise in their current form. The expected behavior may be implicit and rely on a tester’s judgement. In such cases, reviewing test cases may be required. Typically, the need for review becomes apparent when implementing automated test cases.

Furthermore, there are some tests for which there is no clear and definite description of the desired outcome, such as

  • In exploratory testing, there are no clearly pre-defined test cases at all. There may be rough ideas on what to exercise. However, for the most part, the system is tested on the fly.
  • During usability testing, users who are (or at least represent) real users interact with the system. Do they get lost while doing so? Can they complete the tasks they are trying to do? Is using the system a positive experience? Such highly subjective assessments of a system cannot be encoded into an automated test case easily.

These examples show that there are plausible tests via the GUI for which GUI test automation is not the best fit.

Test methodologies like behavior-driven development and testing can be of great help here. Domain experts can clarify the desired behavior in a free-form language, which a tester can then augment with the logic that drives the UI.
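As a rough sketch of how this can look with Squish’s BDD support for Python (the feature text, step placeholder and object name below are assumptions made for this example), a domain expert might write:

Feature: Passenger comfort

  Scenario: Increasing the cabin temperature
    When the driver presses the "+" button 4 times
    Then the temperature display shows "23"

and a tester then backs the When step with the logic that drives the UI:

@When('the driver presses the "+" button |integer| times')
def step(context, presses):
    # Hypothetical object name; |integer| delivers the count as a number.
    for _ in range(presses):
        mouseClick(waitForObject(names.temperatureUp_Button))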

Initial Investment

Any form of testing requires some up-front investment, no matter whether it is manual or automated. Test cases need to be written, and test plans need to be laid out. Manual testing permits a rather free-form description of test cases. This is convenient when writing them, but comes at the risk of losing precision.

GUI test automation incurs an additional one-time overhead. QA teams need to decide on a suitable test automation tool. Testers need to familiarize themselves with the tool. It may be necessary to port test cases to a format suitable for consumption by the tool.

Automated GUI Testing requires less effort over the lifetime of a product

Tests are executed for the entire lifetime of the software product. In particular, tests are typically executed a lot more often than they are modified. Thus, the long-term benefits of faster (and more reliable) test execution soon outweigh the up-front investment. The effort associated with introducing test automation into a project amortizes over the entire lifetime of the product.

Identify Test Cases Suitable for Automation

Not all test cases verify behavior which is easily accessible to a computer. Such test cases likely do not lend themselves to test automation.

Imagine the ‘Print’ functionality of an application. Technically, a full end-to-end test would need to observe the printer. It would need to check that the paper displays the intended output. While it’s possible to do so with sufficient (hardware-) effort, it’s most likely impractical for many cases. Instead, it’s perfectly sufficient to have a human tester verify this functionality.

Successful test automation efforts concentrate on the low-hanging fruit. Many projects define a large number of relatively simple test cases. Testers often find them to be boring and will greatly appreciate if a computer can take over the work. Aiming for 100% GUI test automation is most certainly not sensible from an economic point of view.

To address this challenge, make sure to select an appropriate test automation tool. It’s useful — but not sufficient — to perform image-based testing. The test automation tool needs intimate knowledge of how the application under test is constructed. That way, it can access individual objects and perform object-based testing.

Automation Blindness

It is entirely possible to automate too much. Automation can be such a time saver that testers end up automating everything. This can lead to “automation blindness” – testers no longer question whether a test case is necessary at all.

As a result, test cases tend to accumulate over time. This results in ever increasing test suites with more and more test code to maintain. Executing test suites takes longer, too – jeopardizing the benefits gained by faster test case execution! A classic example is test case overlap: over time, as the application under test changes, test cases may exercise very similar (or even the same) functionality. This makes test suites run slower at very little gain.

To fight automation blindness, it’s imperative to review the coverage of GUI tests constantly. A good code coverage tool is highly recommended for this. In the best case, it integrates tightly with the GUI test automation tool. That way, you can easily review the code coverage per GUI test case as well as identify test cases which provide no or only very little extra benefit.

Conclusion

GUI test automation is a major improvement for all but the simplest software development projects. Introducing test automation early in the process maximizes the time over which the initial one-time investment amortizes.

GUI test automation enables frequent, fast and repeatable test runs, which help with

  • spotting regressions faster
  • making defects easier and cheaper to fix
  • simplifying project management by reducing the risk of regressions discovered very late in the process

Designing test cases with automation in mind increases the leverage of the tool, liberating the QA staff to concentrate on higher-level tasks which only humans can perform.

It is extremely important to select a proven, industrial-strength test automation tool from a dedicated provider. Test automation is a very long-term process – good technical support, continuous and timely updates as well as comprehensive documentation are critical. The IT landscape changes constantly, and so do users’ expectations. Hence, not only do the applications need to adapt – the testing tool has to follow suit.

The post GUI Test Automation – Benefits & Challenges appeared first on froglogic.

Managing Multiple Squish Editions on a Single System


A common approach to testing Applications Under Test (AUTs) built with different UI technologies is to have multiple editions of Squish installed. However, by default, each Squish IDE uses the same Eclipse workspace directory, so they cannot be run side by side.

As a pre-sales engineer, I tend to switch between Squish editions often, but demo an AUT with the same name in each of them. For this reason, it makes sense for me to have separate server settings as well as separate workspaces for each Squish edition I am using.

For a customer who has to test multiple AUTs, multiple workspaces make it convenient to record script snippets against both AUTs without clobbering each other’s workspaces.

This article explains different ways to start up Squish so that you can run multiple editions side-by-side.

Settings Directory

Squish stores its settings in a directory whose location depends on the platform. On Windows, you can find it in %APPDATA%\froglogic\Squish, while on Linux or macOS, it can be found in ~/.squish. It is possible to use a different location by setting the environment variable SQUISH_USER_SETTINGS_DIR.

What kind of information is stored in the Squish settings directory? Among other things, the files located in its ver1 subdirectory: paths.ini (information about global script directories and other paths) and server.ini (server settings, which contain information about AUTs and their locations, as well as the global script directories).

By default, the workspace directory is stored under this location in a workspace-ver-1 directory.

Workspace Directory

Eclipse, and thus also the Squish IDE, stores the IDE settings and project state in a folder called the workspace directory. It contains the information needed to restore your windows and your projects (test suites) to the state they were in when you quit or switched away from them, along with other persistent settings. Due to file locking restrictions, two concurrent Eclipse processes cannot share a workspace, which is why, by default, running two Squish editions side by side is not possible without changes.

The workspace directory can be customized by passing a -data argument to squishide from the command line.

Since the default workspace directory is located under the user settings directory, customizing SQUISH_USER_SETTINGS_DIR moves the workspace directory along with it, giving each edition its own server settings and its own workspace.

This approach is recommended when you have AUTs with the same name used in different editions of Squish.

Shared Server Settings, Separate Workspace

Using squishide -data tells the IDE to use a different workspace directory. If we do this for the different editions of Squish, we can run them side-by-side as well. In this case, they share the same Squish server settings.

Use this approach if the AUTs in different editions all have different names.

Example Linux Startup Script

This script, ‘squishjava‘, sets up Squish for Java to use a separate user settings directory on Linux. Changing into the SQUISH_PREFIX directory before running squishide is optional.

#!/bin/sh
# Use a per-edition settings directory; the workspace will be created under it, too.
export SQUISH_PREFIX=/usr/local/squish/squish-for-java-6.5.2
export SQUISH_USER_SETTINGS_DIR=~/.squish-java
cd "$SQUISH_PREFIX"
./squishide

Example Windows Startup Batch File

This file is called ‘squish-win.bat‘ and is used to start the Squish IDE in a similar fashion for Squish for Windows. It also passes a -data switch to squishide, which tells the IDE to use a different directory for the Eclipse workspace and settings.

@echo off
rem Use a per-edition settings directory and a separate Eclipse workspace.
set "SQUISH_USER_SETTINGS_DIR=%APPDATA%\Froglogic\Squish-Win"
set "SQUISH_PREFIX=%USERPROFILE%\Squish for Windows 6.5.2"
cd /d "%SQUISH_PREFIX%"
squishide -data "%USERPROFILE%\workspace\squish-win"

Caveats

Changing SQUISH_USER_SETTINGS_DIR affects the IDE as well as the command line tools. If you want to use the command line tools for a particular edition, for example for batch-driven testing, make sure to run them with the same environment variables set.
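For example, a batch run can export the same settings directory before invoking squishrunner. A minimal sketch, assuming hypothetical installation paths and suite names:

import os
import subprocess

# Use the same per-edition settings directory as the IDE (paths are hypothetical).
env = dict(os.environ,
           SQUISH_USER_SETTINGS_DIR=os.path.expanduser("~/.squish-java"))

# Assumes a squishserver started with the same environment is reachable.
subprocess.run(
    ["/usr/local/squish/squish-for-java-6.5.2/bin/squishrunner",
     "--testsuite", os.path.expanduser("~/suites/suite_addressbook")],
    env=env, check=True)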

Changing the Squish IDE’s -data location does not affect the behavior of command line tools, it is purely a switch for the IDE.

Conclusion

Managing multiple Squish editions involves managing either the Eclipse workspace directory or the Squish settings directory. Managing the workspace directory lets you pick up your work where you left off more easily. Managing the settings directory, and thus the server settings, makes it possible to have AUTs with the same name registered for different editions.

Each approach has its pros and cons, depending on whether your AUTs all have unique names.

The post Managing Multiple Squish Editions on a Single System appeared first on froglogic.

Strategies for Higher Test Coverage


In many software projects, tests are neglected at the beginning of development, and the focus is put on design and features. This is normal: the first goal should simply be to produce software that works. Priorities change once the software is released, new features are added, and maintenance becomes an issue. Adding new functionality can become difficult, because any extension of the project can have unpredictable effects on existing features.

Adopting a code coverage tool is one important method to ensure quality for already existing projects undergoing new developments. Here, we discuss using code coverage tooling efficiently as a test monitoring tool, and present strategies for higher test coverage to ensure product quality adapted to common development scenarios.

Choosing the Right Code Coverage Metric

Before instituting a test monitoring strategy, it is necessary to define what should be tracked and analyzed. Code coverage is a well-known metric for tracking test progress, but there are many variants of it, differing mainly in granularity. These variants include function coverage, line coverage, statement coverage, decision coverage, condition coverage, Modified Condition/Decision Coverage (MC/DC) and Multiple Condition Coverage (MCC). It is necessary to choose the most relevant one (if it is not already imposed by, for example, a safety-critical standard).

A common error is to track all of them in order to avoid selecting one. A general rule is that a single issue should be monitored by a single measurement, simply to make the analysis easier. Following all metric values makes the results difficult to interpret. How should we interpret a situation where one metric shows a gain in quality, whereas another does not?

In code coverage analysis, it is in principle not too difficult to choose the most appropriate metric. The metrics can first be ordered by increasing precision: function coverage, line coverage, statement coverage, decision coverage, condition coverage, MC/DC and MCC.

One property of this ordered list is that if code has 100% coverage with one metric, all preceding metrics will also be at 100% coverage. So with 100% condition coverage, we also get 100% function, line, statement and decision coverage, but not necessarily 100% MC/DC or MCC coverage. It is thus not necessary to track two coverage metrics. It is only necessary to choose a good compromise between the precision of the metric and the exhaustiveness of the tests. A good coverage tool will allow recording all metrics.
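As a small, tool-agnostic illustration of this ordering (the function below is made up for the example), consider one decision composed of two conditions:

def discount(is_member, total):
    # A single decision composed of two conditions.
    if is_member and total > 100:
        total = total * 0.9
    return total

# Statement coverage: one call that takes the branch executes every statement.
discount(True, 200)

# Decision coverage additionally needs the decision to evaluate to False once.
discount(False, 200)

# MC/DC additionally requires each condition to flip the outcome on its own:
discount(True, 200)   # (True, True)  -> decision is True
discount(False, 200)  # (False, True) -> is_member alone flips the decision
discount(True, 50)    # (True, False) -> the total check alone flips the decision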

Strategies for Integrating the Coverage Metrics

Recording the metrics alone is not sufficient — a strategy behind the coverage metrics must be in place. One strategy is to require a minimum coverage of the product.

Overall Minimum Code Coverage

This strategy is difficult to apply in general if the coverage recorded by unit tests and by application tests is mixed. Coverage from application tests grows quickly, and with few tests, 50% coverage can be reached. Rapidly, however, coverage reaches an asymptote, and it is difficult to get beyond 75%. Coverage from unit tests grows slowly with the number of tests, but often results in higher coverage. In most cases, it is the only way to reach 100% coverage.

But for an existing product without an automatic test suite, this strategy causes another issue. A released product, even if not perfect, has a level of quality that is good enough for it to be used; having no formal tests does not mean that the product does not work. First, fixing a coverage goal for the product, for example 90%, would require significant effort and may not be realistic. Second, does it make practical sense to write tests for legacy sources that have not been touched for years, and for which developer experience shows that they work?

Coverage Threshold on New Commits

For the case described above, it is better to step aside from an overall coverage goal, and instead set the requirements on the newly developed features. With a code coverage tool, this is achieved in two different ways:

  • Comparison of two software releases.
  • A Patch Analysis (or Test Impact Analysis), which permits an analysis of individual commits.

Comparing Code Coverage of Two Releases

By comparing the coverage of two releases, it is possible to get the coverage of the modified code between two releases. This strategy offers several advantages over monitoring the overall coverage:

  • It is common that the effort required for increasing coverage grows steeply over time. If your product is 30% covered, 1% more is not much work. But once coverage reaches 90%, an additional 1% likely requires more effort than the first 50% did. By monitoring the coverage of the code developed between two releases, the effort stays roughly constant from release to release.
  • This strategy does not force developers to write artificial tests on legacy code, simply to fulfill an overall coverage requirement.
  • This strategy makes the decision to release or not less arbitrary, by providing a more informed assessment of the quality of the new features in development.

Patch Analysis

The comparison of releases is a good way for QA to monitor the quality of product development. But let’s consider a common development scenario: once a product is released, additional hot fixes often need to be published. Doing so poses the risk of breaking existing features. A careful review of the code is good, but it is difficult and not as robust as a quantitative check like a patch analysis.

A patch analysis decorates a patch generated by a Version Control System with:

  • Statistics about the coverage of the patch itself.
  • The list of tests impacted by the patch.
  • The list of lines not covered by tests.

With this information, the review is easier, because the reviewer can assess whether the patch is adequately tested and whether the risk of publishing the fix is too high.

Wrap Up

A goal of using code coverage analysis in your development is to obtain a more informed assessment of your application’s quality. And for existing software without automatic tests created from the get-go, a code coverage tool can help ensure the quality of future product releases. Achieving high test coverage can be done in many ways, but it is important to look at the strategies behind your coverage goals to make sure they remain efficient.

The post Strategies for Higher Test Coverage appeared first on froglogic.

The V-Model in Software Testing


The V-Model is a model used to describe testing activities as part of the software development process. The V-Model can be interpreted as an extension of the Waterfall development model, which describes the testing activities as one of the last steps in a sequential development process. However, in contrast to the Waterfall model, the V-model shows the relationship between each development phase and a corresponding test activity.

Even though the V-Model is criticized because it simplifies the software development process and does not map completely onto modern Agile development processes, it includes a number of terms and concepts essential to software testing. These terms, and the concepts behind them, can be leveraged to find a proper structure for the testing efforts in your software project. Moreover, the V-Model can be used as a model for each iteration of an Agile project.

Introduction to the V-Model

The main idea behind the V-model is that development and testing activities correspond to each other. Each development phase should be completed by its own testing activity. Each of these testing activities covers a different abstraction level: software components, the integration of components, the complete software system and the user acceptance. Instead of just testing a monolithic piece of software at the end of the development process, this approach of focusing on different abstraction layers makes it much easier to trigger, analyze, locate and fix software defects.

The V-Model describes the development phases as the left branch of a “V”, while the right branch represents the test activities. Such a V-Model is shown in the following illustration:

The V-model in software development

Development Phases

The development phases from the V-model are commonly known either from the Waterfall model or as logical phases of real-world development pipelines. We will go through each phase.

Requirements Analysis

First, the requirements of the software system have to be discovered and gathered. This is typically done in sessions with users and other stakeholders, for example, through interviews and document research. The requirements are defined more precisely through the analysis of the information gathered.

The software requirements are stored persistently as a high-level requirements document.

System Design / Functional Design

Based on the output from the Requirements Analysis, the system is designed at the functional level. This includes the definition of functions, user interface elements, including dialogs and menus, workflows and data structures.

Documents for the system tests can now be prepared.

If the test-driven approach of Behavior Driven Development (BDD) is followed, the feature specification is created. This might be a surprise because BDD is strongly connected to the component level. However, the Squish BDD feature introduced BDD to the functional level.

Architecture Design

The definition of the functional design is followed by the design of the system architecture as a whole and its separation into components. During this phase, general component functionality, interfaces and dependencies are specified. This typically involves modeling languages, such as UML, and design patterns to solve common problems.

Since the components of the system and their interfaces are now known, the integration test preparations can be started in this phase.

Component Design

The next phase is about the low-level design of the specific components. Each component is described in detail, including the internal logic to be implemented, a detailed interface specification that describes the API, and database tables, if any.

Component tests can be prepared now given that the interface specification and the functional description of the components exist.

If the test-driven approach of Behavior Driven Development (BDD) is used for the component level, the feature specification for the individual components is created.

Implementation

The implementation phase is the coding work using a specific programming language. It follows the specifications which have been determined in the earlier development phases.

Verification vs. Validation

According to the V-Model, a tester has to verify if the requirements of a specific development phase are met. In addition, the tester has to validate that the system meets the needs of the user, the customer or other stakeholders. In practice, tests include both verifications and validations. Typically, for a higher abstraction level, more validation than verification is conducted.

Test Stages

According to the V-Model, test activities are separated into the following test stages, each of them in relation to a specific development phase:

Component Tests

These tests verify that the smallest components of the program function correctly. Depending on the programming language, a component can be a module, a unit or a class; accordingly, the tests are called module tests, unit tests or class tests. Component tests check the output of the component after providing input, based on the component specification. In the test-driven approach of BDD, this specification takes the form of feature files.

The main characteristic of component tests is that single components are tested in isolation and without interfacing with other components. Because no other components are involved, it is much easier to locate defects.

Typically, the test implementation is done with the help of a test framework, for example JUnit, CPPUnit or PyUnit, to name a few popular unit test frameworks. To make sure that the tests cover as much of the component’s source code as possible, the code coverage can be measured and analyzed with our code coverage analysis tool Coco.
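A minimal sketch of such a component test using Python’s built-in unittest module (the Calculator component is hypothetical):

import unittest

from calculator import Calculator  # hypothetical component under test


class CalculatorTest(unittest.TestCase):
    def setUp(self):
        self.calc = Calculator()

    def test_addition_matches_specification(self):
        # Input/output check derived from the component specification.
        self.assertEqual(self.calc.add(2, 3), 5)

    def test_division_by_zero_is_rejected(self):
        with self.assertRaises(ZeroDivisionError):
            self.calc.divide(1, 0)


if __name__ == "__main__":
    unittest.main()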

Besides functional testing, component tests can address non-functional aspects like efficiency (e.g. storage consumption, memory consumption, program timing and others) and maintainability (e.g. code complexity, proper documentation). Coco can also help with these tasks through its code complexity analysis and its function profiler.

Integration Tests

Integration tests verify that the components, which have been developed and tested independently, can be integrated with each other and communicate as expected. Tests can cover just two specific components, groups of components or even independent subsystems of a software system. Integration testing is typically completed after the components have already been tested at the lower component testing stage.

The development of integration tests can be based on the functional specification, the system architecture, use cases, or workflow descriptions. Integration testing focuses on the component (or subsystem) interfaces and aims to reveal defects that are triggered by the interaction of the components and that would not be triggered while testing the components in isolation.

Integration tests can also focus on interfaces to the application environment, like external software or hardware systems. This is typically called system integration testing in contrast to component integration testing.

Depending on the type of software, the test driver can be a unit test framework, too. For GUI applications, our Squish GUI Tester can drive the test. Squish can automate subsystems or external systems, e.g. configuration tools, along with the application under test itself.
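As a rough sketch of such a framework-driven integration test (the AddressBook and InMemoryStorage components are hypothetical and assumed to have been component-tested in isolation already):

import unittest

from addressbook import AddressBook   # hypothetical component
from storage import InMemoryStorage   # hypothetical component


class AddressBookStorageIntegrationTest(unittest.TestCase):
    def test_added_contact_is_persisted_via_the_storage_interface(self):
        storage = InMemoryStorage()
        book = AddressBook(storage)

        book.add_contact("Jane Doe", "jane@example.com")

        # The defect we are looking for here lives in the interaction between
        # the two components, not inside either one of them.
        self.assertEqual(storage.load_all(),
                         [("Jane Doe", "jane@example.com")])


if __name__ == "__main__":
    unittest.main()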

System Tests

The system test stage covers the software system as a whole. While integration tests are mostly based on technical specifications, system tests are created with the user’s point of view in mind and focus on the functional and non-functional application requirements. The latter may include performance, load and stress tests.

Although the components and the component integration have already been tested, system tests are needed to validate that the functional software requirements are fulfilled. Some functionality cannot be tested at all without running the complete software systems. System tests typically include further documents, like the user documentation or risk analysis documents.

The test environment for system tests should be as close to the production environment as possible. If possible, the same hardware, operating systems, drivers, network infrastructure or external systems should be used, and placeholders should be avoided as far as possible, simply to reproduce the behavior of the production system. However, the production environment should not be used as a test environment because any defects triggered by the system test could have an impact on the production system.

System tests should be automated to avoid time-consuming manual test runs. With the addition of image recognition and Optical Character Recognition, Squish can automate practically any GUI application and can trigger and feed non-GUI processes. In addition to verifications of application output through the GUI, Squish can access internal application data available in both GUI and non-GUI objects. Testing multiple applications (even ones based on different GUI toolkits) in a single automated test, embedded device testing, access to external systems like databases, and semi-automated testing that includes manual verifications and validations cover most of the needs of system testing.
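For instance, a system test can verify internal application data that is not directly rendered in the GUI. A minimal sketch for a Qt AUT, with a hypothetical AUT name and object names:

def main():
    startApplication("addressbook")  # hypothetical AUT name
    table = waitForObject(names.contacts_TableView)
    # Verify non-visual data exposed by a GUI object: the row count of the
    # table's underlying model after loading a data file.
    test.compare(table.model().rowCount(), 25, "All 25 records were loaded")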

Acceptance Testing

Acceptance tests can be internal, for example for a version of the software that has not been released yet. Often, these internal acceptance tests are done by people who are not involved in the development or testing, such as product management, sales or support staff.

However, acceptance tests can also be external tests, done by the company that asked for the development or the end-users of the software.

For customer acceptance testing, it might even be the customer’s responsibility, partially or completely, to decide if the final release of the software meets the high-level requirements. Reports are created to document the test results and for the discussion between the developing entity and the client.

In contrast to customer acceptance testing, user acceptance testing (UAT) may be the last step before a final software release. It is a user-centric test to verify that the software really supports the workflows it is meant for. These tests might go as far as covering usability aspects and the general user experience.

The effort put into acceptance testing may vary depending on the type of software. If it’s a custom development, which has been developed for a single client, extensive testing and reporting can be done. For off-the-shelf software, less effort is put into this testing stage and acceptance tests might even be as slim as checking the installation and a few workflows.

Acceptance tests are typically manual tests, and often only a small portion of them is automated, especially because these tests may be executed only once at the end of the development process. However, since acceptance testing focuses on workflows and user interaction, we recommend using Squish to automate the acceptance tests. BDD can be used as a common language and specification for all stakeholders. Also, think about the effort that has to be put into acceptance tests of multiple versions over the software lifecycle, or into the many iterations of Agile projects. In the mid and long term, test automation will save you a lot of time.

The post The V-Model in Software Testing appeared first on froglogic.


Headless Execution of GUI Tests with Jenkins


Executing automated GUI tests using the Continuous Integration/Continuous Deployment (CI/CD) system Jenkins is a popular approach to streamlining automation. However, the computer on which the test suite is executed through Jenkins may be “headless.” That is, it may not have a physical display connected to it, or it may not have a graphical logon session in which the GUI application could be rendered. Both points are commonly the case with rack-based computer systems, and both may interfere with GUI automation in general.

Here, we discuss headless execution of GUI tests created with the Squish GUI Tester, using the Jenkins server on Unix platforms. We’ll walk through using our Squish-Jenkins integration plugin to get the kind of fast results and quick feedback you need for your project.

Potential Problems in Headless Setups

In such headless setups, one typically receives error messages related to lack of a logical display. When using Squish, these error messages are visible in the build’s page at Squish report > Server Log. Possible error messages include:

qt.qpa.screen: QXcbConnection: Could not connect to display
Could not connect to any X display.
application-specific initialization failed: no display name and no $DISPLAY environment variable
ERROR:browser_main_loop.cc(1485)] Unable to open X display.
Error: no DISPLAY environment variable specified
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb.

Solution: Virtual Displays

For Unix systems, “virtual” displays can be used to avoid the need for a graphical logon session and/or a physical display on the computer where the tests are to be executed.

One popular virtual display solution for Unix systems is Xvfb. Using Xvfb inside of a Jenkins project is easy when using the Jenkins Xvfb plugin.

To install and use this plugin in a Jenkins project, follow the steps outlined below:

Step 1: Open Jenkins Plugins Manager

  • Go to Jenkins (1) > Manage Jenkins (2) > Manage Plugins (3):
Jenkins Plugin Manager Menu

Step 2: Install Xvfb Jenkins Plugin

  • Select Available (1)
  • Enter “xvfb” into Filter: (2)
  • Check checkbox of entry Xvfb (3)
  • Click Install without restart or Download now and install after restart (4)

Xvfb is often already installed on Unix systems. To test if it is installed, simply execute which Xvfb in a shell to see if it is found. (To install Xvfb on Ubuntu, execute sudo apt install xvfb in a shell.)

Xvfb Jenkins Plugin Installation

Step 3: Open Jenkins Project Configuration

  • Open the desired Jenkins project’s configuration (1, 2):

Step 4: Enable Xvfb Plugin for the Project

  • Select Build Environment
  • Check checkbox Start Xvfb before the build, and shut it down after. (2):

Step 5: Add a Shell Build Step

  • Select Build (1), Add build step (2), Execute shell:

Step 6: Enter Shell Script

  • Select Build (1)
  • At Command (2) enter the following:
# Output the DISPLAY name, just to see what it is
# set to:
echo "DISPLAY: $DISPLAY"

# Start a window manager to have proper focus and
# window management:
fluxbox &

# Give the window manager some time to launch:
sleep 5

For this, the “fluxbox” window manager must be installed, but in general, you can use any other window manager. (To install fluxbox on Ubuntu, execute sudo apt-get install fluxbox.)

Also, it is important to have this step before any Squish build step (see below).

Step 7: Add Squish Build Step

  • Add the desired Squish build steps via Add build step (2) > Squish (3):

Wrap-Up

Using the Jenkins integration plugin, it’s easy to set up automated execution of GUI tests, even on computers without a display or monitor attached. Using a Continuous Integration tool like Jenkins lets users launch their GUI tests with a simple mouse click, or trigger them by changes to their application or to the test scripts themselves.

The post Headless Execution of GUI Tests with Jenkins appeared first on froglogic.

How-To: GUI Testing Remote Embedded Devices


New to Squish 6.6 is a fully-integrated remote control solution for improved testing of remote systems. With it, you can display the screen of the remote device in real time as a test is executed, and record new tests, including picking objects for inspection, even if the application under test is running on a computer in a remote location. This functionality removes the need for any additional remote desktop solution, and works for virtually any target, including embedded systems.

The best part? It’s a one-click solution. The data required for its operation is embedded within regular Squish communication. No longer will you have to worry about external remote control tooling problems of interoperability, compatibility, network setup, or multi-platform AUT requirements. In Squish 6.6, a click of a button displays your remote system’s GUI locally on your machine, with no special configuration or setup.

Remote Testing of an Embedded Device

We will demo this new feature using a mock-up of an In-Vehicle Infotainment (IVI) system running on a remote Raspberry Pi embedded board. The UI itself — Neptune UI — was developed in the Qt Automotive Suite and is implemented with QML.

Qt's Neptune UI Developed in Qt Automotive Suite
Neptune UI Touchscreen Display.

The vehicle’s passengers would increase or decrease the temperature to their comfort using the IVI touchscreen:

Neptune UI Screen for Changing Passenger Air Temperature
Passenger Heating and Cooling Settings.

You’ll notice the “+” and “-” buttons to set the desired temperature. Each press of the button increases or decreases the air temperature by 0.5 deg C.

We will utilize Squish to write an automated test which increases the driver’s side air temperature and then verifies that the display reading updates correctly. We will walk through using the Remote Control feature step-by-step:

Step 1. Set the remote AUT’s Hostname and Port

Within the Squish IDE, we navigate to Edit > Server Settings, and click Add... to define an Attachable AUT.

We set a representative name, and give the Host and Port number, as in the below screenshot:

Squish Server Settings for Defining Attachable AUT
Add Attachable AUT Server Settings.

Step 2. Verify the AUT launches

Squish allows you to display the screen of the remote AUT, even outside of record and replay. To verify our connection to the remote system, we’ll launch the AUT, using Run > Launch AUT.

In the icon bar, click on the Remote Control icon.

The ‘Squish Remote Control’ window should appear, displaying the GUI of our AUT. We’ll go ahead and quit the AUT, and begin recording.

Step 3. Record test case

The remote control functionality is not just for displaying your AUT. You can actually pick and inspect objects as if the application were running locally. In a new test case, we click the Record button. The familiar Control Bar will appear. We’ll click on the Remote Control button to display our AUT.

Squish Control Bar: Remote Control.

We can now record as usual. According to our test requirements, we need to click the “+” button on the touchscreen to increase the air temperature. We will increase temperature by 2 deg C, which requires four clicks.

Next, we want to verify the display reading updated correctly. We’ll insert a Property Verification Point using the Control Bar.

Squish’s Picker tool can be used for picking UI controls on the remote target. We’ll click the display reading (23 deg C) and set the VP. Our recorded script now looks as in the following code block:

# -*- coding: utf-8 -*-

import names

def main():
    attachToApplication("NeptuneUI_AUT")
    mouseClick(waitForObject(names.o_Label))
    mouseClick(waitForObject(names.o_Label))
    mouseClick(waitForObject(names.o_Label))
    mouseClick(waitForObject(names.o_Label))
    test.compare(str(waitForObjectExists(names.o23_Text).text), "23")

To improve readability, portability and maintenance, we’ll refactor our script to give more meaningful object names, covered next.

Step 4. Refactoring

To make our test more concise, let’s re-work the touchscreen presses. We’ve written a helper function which simulates the passengers’ touchscreen presses, taking the number of degrees by which the driver wishes to increase the temperature:

def tempIncrease(deg):
    # Each press of the "+" button changes the temperature by 0.5 deg C,
    # so two presses are needed per degree.
    presses = deg * 2
    for i in range(presses):
        mouseClick(waitForObject(names.tempIncreasePress))

We’ve also renamed the vague o_Label object using the IDE’s built-in refactoring capabilities. Finally, we’ve refactored our VP for concision:

def isTempDisplayed(num):
    # Wait up to 5 seconds for a Text item showing the given temperature.
    try:
        waitForObjectExists({"container": names.o_QQuickView, "text": str(num),
                             "type": "Text", "unnamed": 1, "visible": True}, 5000)
        return True
    except LookupError:
        return False

Our refactored test case reads as follows:

# -*- coding: utf-8 -*-
source(findFile("scripts", "script_func.py"))

import names

def main():

    attachToApplication("NeptuneUI_AUT")
    #increase the temperature by 2 deg C
    tempIncrease(2)
    # verify the temperature reading
    test.verify(isTempDisplayed(23), "Expected Display Temperature Should Be 23")

We can then play back our refactored script and view the execution through the remote control video stream, as if the AUT were running locally on our machine. You’ll note the minimal effort required to test our embedded device: we simply set the AUT in Squish’s server settings, and the rest was a click away.

Additional Remote Control Settings

There are additional settings to remote control to aid in testing, a few of which are described below. These are available in the ‘Squish Remote Control’ window.

  • Disable Inputs. This button disables forwarding of user inputs. When enabled, the Remote Control dialog functions in view-only mode. This is useful to avoid interfering with remote test case execution that is sensitive to the position of the cursor.
  • Remote Keymap. This button switches the key-press handling mode from local-keymap to remote-keymap. In the remote-keymap mode, the keys are forwarded as key codes and interpreted on the target OS using its current keyboard map. In the local-keymap mode, the keys are interpreted as symbols by the IDE using the local keyboard map and forwarded as symbols.
  • Extra Keys. This button activates buttons on the remote device which do not map to a regular PC keyboard, for example, volume buttons on the side of a smartphone or other controls on embedded devices.

Wrap-Up

Squish 6.6’s click-and-play remote control functionality greatly improves working with remote systems, whether they are desktop, mobile or embedded targets. The added convenience will allow you to record, replay, write, debug and execute tests on your remote device with improved speed and efficiency, without the sometimes complicated setup of external remote desktop software.

The post How-To: GUI Testing Remote Embedded Devices appeared first on froglogic.

Testing Qt for WebAssembly Applications with Squish


The Squish 6.6 release adds support for testing Qt for WebAssembly applications in Firefox and Chrome browsers. While we strive to make it as effortless as possible, the novel nature of this technology requires some preparatory steps before WebAssembly content can be tested.

Squish Built-in Hook

The shared libraries specification for the WebAssembly platform is not yet available, so the Qt library supports building static libraries only. Because of that, the Squish built-in hook needs to be integrated with the Application Under Test (AUT) in order to make it testable. You can achieve this by altering a small part of the AUT source code. First, add a call to Squish::installBuiltinHook() directly after constructing the QApplication object:

#include <QApplication>
#ifdef HAVE_SQUISH
#include "qtbuiltinhook.h"
#endif

int main(int argc, char* argv[])
{
    QApplication app(argc, argv);
#ifdef HAVE_SQUISH
    Squish::installBuiltinHook();
#endif
    [...]
}

Second, add the following snippet to the end of the QMake project file (.pro) which defines the AUT:

[...]

!isEmpty(SQUISH_PREFIX) {
  include($${SQUISH_PREFIX}/qtbuiltinhook.pri)
}

In case the AUT contains any QtQuick content, the project file needs to enable linking the relevant Squish extensions, too:

[...]

!isEmpty(SQUISH_PREFIX) {
  SQUISH_WRAPPER_EXTENSIONS=squishqtquick squishqtquicktypes
  include($${SQUISH_PREFIX}/qtbuiltinhook.pri)
}

Finally, the QMake command used to configure the AUT building needs to specify the path to the Squish for Qt for WebAssembly package:

qmake -makefile <...> SQUISH_PREFIX=/path/to/squish/for/wasm_32

Squish for Qt for WebAssembly binary packages are available in our download area. The binary packages are compatible with the official Qt for WebAssembly binary distribution. Squish users who use a custom-built Qt for WebAssembly will need to build Squish against it using the Squish Embedded SDK source package.

In order to confirm whether the built-in hook was embedded correctly into the AUT, you can load it into a web browser and check the web console for Squish initialization messages:

Squish hook initialization messages in the web console.

The WebAssembly AUT prepared in such a manner should be ready for testing with Squish. Users of Squish for Web should notice an additional application context when opening a webpage with an embedded Qt AUT. By switching between the default web context and the Qt context (as described in our documentation), you can interact with both the WebAssembly content and surrounding HTML within a single test case.

Testing Qt for WebAssembly without Squish for Web

If the tested web application consists entirely of Qt for WebAssembly widgets and does not require any browser interaction before it is shown (e.g., an HTML login form), then it should be possible to test such AUTs in supported desktop browsers using only Squish for Qt. In order to do so, the following preparatory steps must be taken:

  1. Create an empty directory for a new browser profile.
  2. Start the browser with the specified profile directory; you can specify the profile path using the --profile <path> flag for Firefox and --user-data-dir=<path> for Chrome. Install the appropriate browser extension in the new profile.
Web Browser     | Required Extension Version | Source
Mozilla Firefox | 4.1.0 or newer             | Delivered in the lib sub-directory of every Squish for Qt and Squish for Web installation
Google Chrome   | 2.1.0 or newer             | Available in the Chrome Web Store
  3. Install the native messaging host utility with your web browser. The lib/exec/nmshelper sub-directory of the Squish for Qt installation contains an installer script which registers the utility with the supported browsers.
  4. Register the browser startup script as a Squish AUT that uses the built-in hook, using the following commands:
# On UNIX systems
$ bin/squishserver --config addAUT firefox /path/to
$ bin/squishserver --config usesBuiltinHook firefox
# On Windows
> bin\squishserver --config addAUT firefox C:\path\to
> bin\squishserver --config usesBuiltinHook firefox
  5. Create a new Qt test suite in the Squish IDE. Specify the browser as the AUT, and add the profile path and the AUT URL as arguments in the Test Suite Settings editor.
Squish Test Suite Settings for setting a WebAssembly application under test.

Now, Squish should be able to start the web browser and hook the WebAssembly Qt content within. You can record, replay and debug test cases just as you would with native Qt apps.

In order to make the tests easier to execute on different test systems, it may be advantageous to move the browser command into a wrapper script and register the script with squishserver instead. This way, each test system may use a different profile directory and other command line options.

Troubleshooting

In case the web browser starts and the WebAssembly AUT is shown but fails to be recognized by Squish, please inspect the web console for any messages related to the Squish browser extension or native messaging host. You can find additional messages in the extension log.

In Firefox, the extension log is printed to the Browser Console, but in order to see all relevant messages, you may need to set the extensions.sdk.console.logLevel property to all on the about:config page and reload the AUT webpage with the browser console open.

In Chrome, the log is printed to a console which is a part of the background page for the extension. It is available by navigating to the Extensions page, enabling the developer mode and clicking the ‘background page’ link under the Squish extension.

If you are having any difficulties in starting WebAssembly applications with Squish, do not hesitate to contact our support team at support@froglogic.com. Please provide us with the web console log and the extension log to speed up troubleshooting.

Conclusion

Despite the unusual nature of the Wasm platform, testing Qt/WebAssembly applications with Squish should not be any different from testing regular desktop Qt software – except the initial setup. If the AUT can be run as a native application, most tests should easily port between that version and its WebAssembly counterpart.

Testing WebAssembly applications is currently possible on a limited set of platforms and browsers. We are working on extending the set of supported browsers and platforms in future Squish releases. In case you are interested in testing Qt for WebAssembly applications in an unsupported web browser, on embedded or mobile platforms or environments like Electron, please contact us at support@froglogic.com.

The post Testing Qt for WebAssembly Applications with Squish appeared first on froglogic.

Squish 6.6: Now Available for Download!


The froglogic team is excited to deliver a major release of the Squish GUI Tester, version 6.6, the software quality tool chosen by thousands worldwide for cross-platform desktop, mobile, web and embedded application GUI testing.

Squish 6.6 offers new features, usability improvements and a number of bug fixes for all users of the product. Read on to discover the latest additions to assist in your automated GUI testing efforts:

Real-Time Remote Control of Virtually Any Target

New to Squish 6.6 is a fully-integrated remote control solution for improved testing of remote systems. In just one click, Squish will display the screen of your remote system and allow you to pick objects for inspection, easing test recording, scripting, debugging and test playback on your remote device. This functionality is not limited to desktop computers; it has full support for mobile devices and embedded systems, too.

The new feature removes the need for additional remote desktop software, and by extension, alleviates problems of interoperability, compatibility and network setup associated with these tools.

We’ve written a how-to guide on using the remote control feature for GUI testing an In-Vehicle Infotainment system. Follow along here.

Enhanced Scripting Language Support

Python Users

All Squish packages now include Python 2 and Python 3, ensuring full backwards compatibility with previous releases, while enabling the use of the latest Python 3 features. Squish users can select their desired Python version during installation.

JavaScript Users

Users developing their tests in JavaScript will benefit from improved scripting flexibility, better error checking, additional features for convenient and concise scripting, and several bug fixes to existing JavaScript support with the latest Squish.

Bundled Test Result Management & Analysis Platform, Test Center

Squish packages now bundle with the Test Center platform, a tool for comprehensive test report management and analysis. With a natural coupling between Squish and Test Center, users can push their automated test results right from the Squish IDE to Test Center to gain insights into an evolving project’s health.

Built to be lightweight with a convenient web-based user interface, all project stakeholders can easily access the platform right from a web browser on their computer, tablet or smartphone.

To get started with Test Center, an activation code unique to the tool is required. Get in touch with us to begin a free, fully-supported and fully-functional evaluation of Test Center today.

Qt for WebAssembly Support

Squish 6.6 adds support for testing Qt for WebAssembly applications in Firefox and Chrome browsers. Binary packages for Squish for Qt for WebAssembly are available in the download area, and are compatible with the official Qt for WebAssembly binary distribution.

Check out our detailed guide on getting started with testing WebAssembly content with Squish.

.NET Core Support

The Squish 6.6 release offers additional support for testing Windows Forms and WPF applications developed on .NET Core, Microsoft’s open-source, general purpose development platform.

Enhanced Android UI Automation

Squish for Android can now access all controls exposed via the accessibility API. This improves robustness of tests for applications based on self-drawn widgets, like in Flutter applications.

Changelog

Squish 6.6 brings additional features and code enhancements to all editions of the product. Review our changelog for a detailed list of all changes.

Download & Evaluation

Customers and existing evaluators can find the Squish 6.6 packages in their download area. New evaluators are welcome to try out Squish for free, with a fully-supported and fully-functional trial.

Join a Release Webinar

We’re hosting two live webinar and Q&A sessions on features new to Squish 6.6.

Members from our development and support teams will demo:

  • Remote Control functionality
  • Qt for WebAssembly support
  • .NET Core application support
  • Enhanced Android UI Automation
  • Scripting upgrades, Test Center & Squish coupling, general usability improvements and more.

For a detailed schedule of the webinar content and to register, click on the below link in your preferred time zone:

Squish Community

We encourage new and seasoned Squish users to follow us for the latest in:

  • Squish tips and how-to guides
  • Automated testing & software quality blogs
  • Weekly webinars
  • Conferences and meetups
  • froglogic product news and announcements

You can find us on Facebook, Twitter and LinkedIn.

Support

Our customer service team is available anytime for your support needs, big or small. Reach us at squish@froglogic.com.

The post Squish 6.6: Now Available for Download! appeared first on froglogic.

Squish 6.6 Release Webinars


We’re hosting two release webinars for Squish 6.6, this Wednesday and Thursday, July 8th and 9th, 2020.

Lead engineers from our development and support teams will demonstrate the latest additions to the Squish GUI Tester throughout this 1-hour webinar, including:

  • Scripting upgrades, Test Center & Squish IDE coupling, and general usability improvements for each edition.
  • One-click remote control of your target device.
  • Testing Qt for WebAssembly content in Squish.
  • Support for Windows Forms & WPF apps developed on Microsoft’s .NET Core.
  • Enhanced Android UI automation.

The webinar comprises 10-minute demo and discussion blocks for each topic, followed by a 2-minute Q&A portion for that topic.

Free Registration

We’re hosting two sessions for the Squish 6.6 release webinar. Register in your preferred time zone:

EMEA/Asia: July 8th, 11 AM CEST

Americas: July 9th, 11 AM EDT

The post Squish 6.6 Release Webinars appeared first on froglogic.
