
Rebooting a Remote Test Environment From a Test Script


Sometimes we can’t launch our application from our test case, which is often the case if we perform tests on embedded devices, where the AUT is launched during device startup.

In such cases, we simply attach to the AUT at the beginning of each test. At the same time, we want to keep our test cases independent, so bringing back the AUT to the initial state is part of the cleanup of the previous test case.

Unfortunately, if something unexpected happens during execution, our AUT might get stuck in an unknown state or become frozen. The only way to bring it back to the desired point may be to restart the device.

The following Python example shows one possible implementation of this approach.

The reboot will happen in the init() function. This way we are independent of the outcome of the previously executed test.
We will use the paramiko Python module to establish an SSH connection and execute a reboot command on the device. The paramiko module is not part of the Squish Python package, but can be easily installed with pip.
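For example, installing it into the Python interpreter that Squish is configured to use (assuming its pip is on your PATH) comes down to:

$ pip install paramiko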

1. Define the init() function that is called automatically before the test case's main() function:
def init():
    test.startSection("Test initialization")
    attachToApplication("AUT")
    if is_initial():
        test.log("AUT is in the initial state. Continue test execution.")
    else:
        test.log("AUT is not in the initial state. Reboot test environment")
        reboot()
        waitForAut(timeout_s=20)
    test.endSection()

2. Define the is_initial() function that checks whether the AUT is in the initial state:

def is_initial(timeout_s=20):
    """ custom code that checks if your application is in the initial state """
    try:
        # waitForObject() raises a LookupError if the object does not show up in time
        waitForObject(names.initial_screen, timeout_s * 1000)
        return True
    except LookupError:
        return False

3. Define the reboot() function that connects to the embedded device over SSH and sends a reboot command:

import paramiko

def reboot():
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(hostname=DEVICE_IP, username=DEVICE_USERNAME, password=DEVICE_PASSWORD)
    try:
        _, ssh_stdout, _ = ssh.exec_command("sudo shutdown -r now")
        if ssh_stdout.channel.recv_exit_status() > 0:
            raise RuntimeError("AUT reboot failed")
    finally:
        # always release the SSH connection
        ssh.close()

4. Define the waitForAut() function that waits for the device to reboot and for the AUT to become available for attaching again:

from datetime import datetime

def waitForAut(timeout_s=20):
    start = datetime.now()
    while True:
        try:
            attachToApplication("AUT")
            return  # attaching succeeded, the AUT is reachable again
        except RuntimeError:
            if (datetime.now() - start).total_seconds() > timeout_s:
                raise RuntimeError("Can't attach to the AUT")
            snooze(1)  # give the device a moment before the next attempt

Please note that you can use a global cleanup() and init() to implement this solution.
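One possible pattern for this is to keep the helper functions in a shared script and give each test case a thin init(). A minimal sketch, where "remote_setup.py" is a hypothetical script in the test suite's shared Scripts folder containing is_initial(), reboot() and waitForAut() from above:

# test.py of an individual test case
source(findFile("scripts", "remote_setup.py"))

def init():
    test.startSection("Test initialization")
    attachToApplication("AUT")
    if not is_initial():
        test.log("AUT is not in the initial state. Reboot test environment")
        reboot()
        waitForAut(timeout_s=20)
    test.endSection()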



Upcoming Squish Feature: Remote Control of Test Devices


The ability to work with remote devices is one of Squish's core features. While extremely powerful, it does not come without its disadvantages. Interactive test script recording requires interaction with both the Squish IDE and the AUT, and debugging a faulty test script usually requires a preview of the current AUT state. This can be tedious if they are running on different machines, and it can become especially difficult if the two systems are not in the same physical location.

Common Problems in Existing Remote Control Tools

Many platforms support a variety of dedicated remote control tools that could be used alongside Squish to resolve that problem. However, these tools come with some problems of their own:

  • Interoperability. If the controlled and the controlling platform are radically different, it may be difficult to find software that works on both systems.
  • Squish compatibility. Both remote control applications and the Squish GUI Tester use artificial UI events for their operation. This can lead to unwanted interference between them.
  • Setup. A separate set of tools may require a lot of work to become operational. This may include purchasing the license(s), installation and configuration, network setup, etc. It may need to be redone each time a new test system is added.
  • Availability. If you are testing a multi-platform AUT, you will probably need remote access to various kinds of test systems. In case the chosen software does not support some of them, you may be forced to use a heterogeneous solution which includes separate pieces of software and is difficult to use and maintain. Some embedded and mobile platforms do not offer any remote control software at all.

In order to help overcome some of these problems and to speed up the development of tests, the upcoming Squish release will include a remote control feature specifically tailored for GUI testing.

An Android AUT controlled by Squish IDE

The feature is designed as a testing aid and requires a connection to a working AUT on the target system. The data required for its operation is embedded within regular Squish communication. This means that the remote control can be used with minimal setup effort. In most cases, it should be just one click away.

Limitations

The comfort of working with a far-off device over the remote control depends directly on the available bandwidth. Despite the lossless compression used on the video stream, slow connections may prove insufficient for comfortable work. In such a case, you may opt for lossy compression of the image data, which requires less data to be transferred at the cost of some image distortion. The screenshots used for image search and verification are still sent using lossless compression – just as you're used to.

Remote control should be available on any platform supported by Squish. However, on some of the platforms it cannot be used without additional setup.

As far as we know, no currently available Wayland compositor offers features required for remote controlling the user session. We are currently working on a set of plugins for all popular compositors. In order to access remote systems using Wayland, a corresponding plugin will have to be installed and enabled.

Remote control of a Gnome shell Wayland desktop with the froglogic plugin

Summary

The ability to see and to control remote test systems will let you avoid the need to move between different physical test setups or to install and maintain additional software. It will always remain compatible with Squish, and it will make recording and inspection of tests on multiple target systems easier and faster. It will grant the test writers instant access to devices in remote locations and minimize the need for interaction with the physical controls of the device.


Testing Multiple Instances of the Same Application


In some scenarios, you might need to test multiple instances of your application at the same time, for example if multiple desktop clients access a server. The Squish GUI Tester is capable of handling such scenarios from within one test script. We will demonstrate how to write a test script for this with the network-chat example from Qt.

The Interface of the Chat

You can identify the application you are sending your events to by using the ApplicationContext API. To change the current context, you call setApplicationContext with the new context.

def main():
    main = startApplication("network-chat")
    sub = startApplication("network-chat")
    snooze(2)
    setApplicationContext(main)
    message = "Hello. How are you?"
    sendMessage(message)
    setApplicationContext(sub)
    verifyMessageSent(message)

def verifyMessageSent(message):
    textedit = waitForObject({"type": "QTextEdit"})
    raiseWindow()
    if message in str(textedit.plainText):
        test.passes("\"" + message + "\" was sent")
    else:
        test.fail("Message was not sent.", message)

def sendMessage(message):
    lineedit = waitForObject({"type":"QLineEdit"})
    mouseClick(lineedit, 43, 3, Qt.NoModifier, Qt.LeftButton)
    type(lineedit, message)
    type(lineedit, "<Return>")

This setup requires a snooze because the Qt example takes some time to register the other instance; without it, the message is sometimes not sent correctly.

But if you now execute a mouse click on the message input field, this might cause a failure, because another window can be in the way: when startApplication starts the processes for the two instances, the windows are stacked on top of each other. It is therefore necessary to bring one instance to the foreground.

Bringing the Correct Window to the Foreground

If you want to automate the instances correctly, it is mandatory that there is no window blocking your AUT. Moving one instance to the side would be one way, but it is easier to just bring the window of the active context to the foreground.

The code for this is simple. For Python you need a workaround to use the raise function because ‘raise’ is a reserved keyword in this language.

def raiseWindow():
    o = waitForObject({"type":"ChatDialog"})
    o.show()
    getattr(o, "raise")()
    o.activateWindow()

If you have more instances you are working with, it is better to adjust your functions to switch to the right context. You can see a modified version of the verifyMessageSent function below.

def verifyMessageSent(context, message):
    setApplicationContext(context)
    textedit = waitForObject({"type": "QTextEdit"})
    raiseWindow()
    if message in str(textedit.plainText):
        test.passes("\"" + message + "\" was sent")
    else:
        test.fail("Message was not sent.", message)

We have made the same adjustment to the sendMessage function, which you can see in the complete code at the end of this article.

Now you can properly switch between, and automate, two instances of the same application.

In case you want to perform the test with more instances, you can keep the context references that startApplication returns when you start the instances. Another option is to use the function applicationContextList and iterate over the available application context objects, as sketched below.
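A minimal sketch of the latter approach, reusing the context-aware verifyMessageSent() shown above:

def verifyMessageOnAllInstances(message):
    # check the message on every application context Squish currently knows about
    for context in applicationContextList():
        verifyMessageSent(context, message)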

Conclusion

You can see the complete code for the test case below. Note that the chat program recognizes every network adapter as a potential client, so a message may be received more than once; in that case the verifyMessageSent function fails its check that the message was received exactly once. When using this for other applications, you need to adjust the real names, otherwise Squish will not find the right objects.

def main():
    main=startApplication("network-chat")
    sub=startApplication("network-chat")
    snooze(2)
    message = "Hello. How are you?"
    sendMessage(main,message)
    verifyMessageSent(sub,message)

def verifyMessageSent(context,message):
    setApplicationContext(context)
    item = waitForObject({"type":"QTextEdit"})
    raiseWindow()
    if message in str(item.plainText) and str(item.plainText).count(message)==1:
        test.passes("\""+message+"\" "+" was sent")
    else:
        test.fail("Message was not sent correctly.",
                  "Message sent: "+str(message in str(item.plainText))+
                  "\nMessage received once: "+str(str(item.plainText).count(message)==1))

def sendMessage(context,Message):
    setApplicationContext(context)
    mouseClick(waitForObject({"type":"QLineEdit"}), 43, 3, Qt.NoModifier, Qt.LeftButton)
    type(waitForObject(names.chatDialog_lineEdit_QLineEdit), Message)
    type(waitForObject(names.chatDialog_lineEdit_QLineEdit), "<Return>")
    
def raiseWindow():
    o = waitForObject({"name":"ChatDialog","type":"ChatDialog"})
    o.show()
    getattr(o, "raise")()
    o.activateWindow()


Setting Up a Python Virtual Environment with Squish


Python Virtual Environments are great tools for separating different Python configurations on the same machine. Configuring Squish to use a virtual environment gives you the freedom to install packages and make changes that aid your testing efforts without compromising your existing Python installation(s).

Here we will show you how to set up Squish to use a Python Virtual Environment for testing.

First, install Python (version 3.3 or higher) to get the virtual environment functionality. For Squish to work with your Python installation, Squish has to be built against the same Python version. This is done either by asking our support staff to prepare a binary Squish edition with that Python version or by compiling Squish from sources.

Quick note on the configure flags for compiling Squish with Python 3:
$ <SQUISH_SOURCES>/configure … --disable-python --enable-python3
--with-python3=<PYTHON3_DIR_PATH>

At this point, make sure that the environment variables (PATH, PYTHONPATH, PYTHONHOME) point towards the Python installation from which you are creating a virtual environment. Then you can execute:

$ pip install virtualenv
$ python -m venv <PYTHON_VENV_PATH>
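Packages needed by your test scripts can then be installed directly into the virtual environment, for example (requests is just a placeholder package; on Windows the activation script is located in <PYTHON_VENV_PATH>\Scripts):

$ source <PYTHON_VENV_PATH>/bin/activate
$ pip install requests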

After the virtual environment is created, configure Squish to use it by altering the paths.ini in <SQUISHDIR>/etc/.

In LibraryPath change @(SQUISH_PREFIX)/python to <PYTHON_VENV_PATH>/bin
(<PYTHON_VENV_PATH>/Scripts on Windows).
Set Scripting/PythonHome to "<PYTHON_VENV_PATH>".

There are more detailed instructions on how to change the Python installation of Squish here. Now you are good to go!


“Squish is a powerful tool.” Students at Collegium Balticum Get Their First Training on the Squish GUI Tester


Students at Collegium Balticum in Poland participated in a two-day intensive course on the Squish GUI Tester, led by Dr. Marcin Caryk of infinIT Codelab. During the lectures, students initially spent time learning the basic Squish functions, including creating test suites and test cases and recording their first test scripts. In the second day, course attendees began writing their own tests in Python using built-in Squish libraries, and spent time exploring more deeply the Object Map and various UI controls and Squish’s more advanced functions. Students reported being happy to learn and use the tool, noting especially its breadth of application usage. Lectures will again take place with a new set of students next year, and we at froglogic look forward to hearing feedback from the next round of lecture attendees. 


GUI Test Configuration Through Script Arguments


GUI test executions have to be configured for various purposes. Test engineers might want to configure a test in order to use specific test data as input to the Application Under Test. Or the Application Under Test itself has to be executed with a variation of program arguments.

While many tests are self-contained, there are cases in which the configuration is done externally, for example through a scheduler that drives the test executions, such as a Continuous Integration server. In these cases, the squishrunner executable can be used to pass arguments to the test scripts. The test script may access those arguments and adapt its behavior based on their values.

Passing Script Arguments to squishrunner

The squishrunner option that supports this is called --scriptargs. It has to be specified at the very end of the complete squishrunner command call. All arguments which are specified after --scriptargs are passed to the executed script.

The following example shows a squishrunner call that passes two arguments to the test script:

./squishrunner --testsuite /path/to/suite --testcase tst_case1 --reportgen xml3.2,/path/to/result --scriptargs argument1 argument2

Accessing these arguments in the test script depends on the scripting language that is used. Assuming that Python is used, we need to use the sys module which provides a list of command line arguments through the argv variable. The following snippet shows how to log all arguments to the Squish Test Result. Visit the Squish documentation to learn how to access arguments in JavaScript test scripts.


import sys

def main():
    test.log('Number of arguments: ' + str(len(sys.argv)) + ' arguments.')
    test.log('Argument List: ' + str(sys.argv))

When executing the snippet above, you will notice that the first field, argv[0], holds the name of the script itself. This is common practice in many scripting languages. The first argument we pass to the squishrunner --scriptargs option can be accessed through argv[1].
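Because the script might also be started without any --scriptargs (for example, directly from the Squish IDE), it can be worth guarding against missing arguments. A small sketch:

import sys

def main():
    # sys.argv[0] is the test script itself; real arguments start at index 1
    if len(sys.argv) < 2:
        test.fail("No script arguments passed via --scriptargs")
        return
    test.log("First argument: " + sys.argv[1])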

Configuring an Example Application through Script Arguments

Let's get back to one of the initial examples: we want to pass program arguments to configure the AUT. Let's assume the AUT expects the arguments --nolauncher and --autoconnect. The command for such an application startup could look like this:

./exampleApplication --nolauncher --autoconnect

To pass these arguments to the test script, the squishrunner call would look like this:

./squishrunner --testsuite /path/to/suite --testcase tst_case1 --reportgen xml3.2,/Users/flo/tmp/result --scriptargs --nolauncher --autoconnect

Finally, we need to access the arguments in the test script itself, and pass it to the startApplication call:


import names
import sys

def main():
    autCommand = "exampleApplication"
    for i in range(1, len(sys.argv)):
        autCommand = autCommand + " " + sys.argv[i]
    startApplication(autCommand)
    ...


QASource: How Squish Simplifies Cross-Platform Testing


Here at QASource, our 700+ engineers work with a variety of tools daily to test various software applications in diverse industries for different businesses, from startups to Fortune 500 companies. So it’s safe to say we’ve seen our fair share of software testing tools.

Today’s trends in automated testing have most engineers looking for open-source and commercial tools that support GUI automation for a variety of technologies. So when it comes to finding the right tool for automating application test cases for such a broad technology spectrum, choices can be slim.

But Squish changes that. It is now one of the single most popular tools for automated GUI regression testing. In addition to the robust cross-platform support, it provides a handy toolkit for automating web, embedded, desktop, and mobile applications—all in one place, without having to resort to a handful of different tools.

This versatility is what our QA engineers like most about Squish, in addition to these other benefits:

  1. Data- and Behavior-Driven testing. Automation teams are shifting their focus toward Data- and Behavior-Driven testing, and Squish supports a variety of data sources and is 100% compatible with the Gherkin language.
  2. Support for popular scripting languages. Squish supports JavaScript, Python, Perl, Ruby and Tcl for script authoring, allowing QA personnel to use a familiar language when writing test scripts. What’s more, these languages are vastly more expressive than trees or lists of steps, and offer the advantage of reuse of a large ecosystem of existing, built-in modules. 
  3. Visual Verification.  Squish provides support for hybrid verification using their algorithm to verify combinations of object properties along with screenshot verification. This helps to reduce false positives and allows for quick troubleshooting.
  4. Image-based testing. There are cases where Object-based recognition is not suitable due to unsupported toolkits or custom UI controls. Squish provides robust Image-based recognition methods to handle such applications. Some of these features include a per-pixel tolerance, image cross-correlation and Fuzzy Image search.
  5. Distributed Batch Execution. QA teams commonly run test suites on multiple workstations for faster execution. Squish supports distributed batch execution to help QA teams finish testing with better efficiency.  
  6. Integrated Development Environment. Squish offers its own powerful, Integrated Development Environment (IDE), which allows teams to write and debug their test scripts smoothly.
  7. Extensive integration. Squish pairs with various Continuous Integration (CI) tools like Jenkins, so batches can be scheduled from anywhere. It also integrates with other top tools like Ant and Maven.
  8. Record and playback easily. New users can easily record the script for their application and convert it into code in a supported language.
  9. Helpful customer support. An active, responsive and efficient customer support team helps resolve any questions and address customization requirements during testing.

Testing Windows, web, embedded, or mobile applications? Squish can help. froglogic offers customizable packages based on your testing needs, whether that is a combined package for Windows and Qt, Windows and web, or another combination. It also allows you to switch between all of these application types to automate your end-to-end test scenarios.

Squish provides plenty of automation power for teams looking to test applications across multiple platforms, allowing work to get done faster and engineers to have clearer insight into reporting. Plus, the ability to contact a responsive, knowledgeable support team is a huge bonus!
We have a lot of experience with automated testing tools, and we feel confident in recommending Squish for your automated testing needs. If you’re looking for a team to deliver test automation services, we may be able to help. We’d love to learn more about your current testing project, and you can find us at QASource.com.


Coco and System Header Files


A C or C++ program always includes header files that are provided by the compiler or the operating system. When they are instrumented by Coco, sometimes unexpected error messages occur.

This post explains why this happens, when this happens, and what to do about it.

A Concrete Example

Here is an example of such an error. It occurs during the compilation of the TextEdit example for Coco with MinGW on a Windows machine:

mingw32-make[2]: Entering directory 'C:/Users/Someone/textedit/textedit_v1'
csg++ -c -o textedit.o textedit.cpp
c:/mingw32/include/c++/bits/random.h:73: col 44: Warning (Squish Coco): syntax error, unexpected ';', expecting '}'
c:/mingw32/include/c++/bits/random.h:117: col 45: Warning (Squish Coco): syntax error, unexpected ';', expecting '}'
c:/mingw32/include/c++/bits/random.h:200: col 4: Warning (Squish Coco): syntax error, unexpected '}', expecting $end
c:/mingw32/include/c++/bits/random.h:6066: col 2: Warning (Squish Coco): syntax error, unexpected '}', expecting $end

More error messages follow, and the compilation fails:

Fatal Error (Squish Coco): Could not generate object file

This is a very specific kind of failure. That is, the error message does not occur in the source files of the project.

One can see in the first line of the compilation log that the project resides in the directory C:\Users\Someone\textedit, but the error occurs in the file C:\mingw32\include\c++\bits\random.h, which is outside the project. The file is instead a header file of the C++ standard library that comes with MinGW.

Related to this is another notable difference to other Coco errors: the error is compiler-specific. When the same project is compiled with, for example, Visual Studio, the error does not show up.

How to Handle it

As in many cases in which Coco cannot handle a file, we must exclude it from instrumentation. As we will soon see, it is better to exclude all system header files. The error messages give us the information we need.

In the example, MinGW is installed in the directory C:\mingw32, and the paths to all its system header files begin with “C:\mingw32\include“. The simplest way to exclude these files is therefore to add a compilation option

--cs-exclude-file-abs-wildcard=C:\mingw32\include\*
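In practice, this option is simply appended to the instrumented compiler calls. For the compile line from the log above, this would look roughly like:

csg++ --cs-exclude-file-abs-wildcard=C:\mingw32\include\* -c -o textedit.o textedit.cpp

In a Makefile- or qmake-based project, the option is typically added to the compiler flags so that every compilation unit is built with it.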

But why is this necessary? And why can’t Coco handle this case on its own?

Files that are Excluded

The answer is that in most cases, Coco does exclude system header files on its own. But to exclude them, Coco needs to know where they are.

In the following situations, the location of the system header files is known to Coco, and they are excluded automatically:

  • Under Unix, when a built-in compiler like gcc or clang is used. The system headers in the directory /usr/include and its subdirectories are then excluded automatically.
  • Under Windows, when Visual Studio or MSBuild are used. Coco then knows how to find the system header files that come with the compiler and how to exclude them.

For other build systems, the system directories must be given explicitly. The most common cases are:

  • MinGW, as we have seen, and,
  • Most cross-compiler toolkits, under Linux and also under Windows.

For cross-compiler toolkits it is often also necessary to create a special version of the compiler wrapper. The documentation shows how this is done.

Why System Header Files Must Be Excluded

We have not yet explained why files like random.h cause an error.

This is a trade-off in the design of Coco. System header files may contain compiler-specific code or syntax that is never used in application code. One could extend Coco’s parser to handle these cases, but it would be:

  1. An enormous effort, because the work needs to be done for every compiler, and,
  2. Unnecessary, since the goal of a coverage measurement tool is to measure the coverage of an application and not of the system libraries.

Coco is therefore built to work with standard C and C++ code, and it automatically tries to exclude the system header files. The only disadvantage of such an approach is that sometimes Coco cannot find the header files.

What Happens When a File is Excluded

When Coco is instructed to exclude a file from instrumentation, the CoverageScanner reads it in an accelerated mode. It does not parse it and does not insert any instrumentation code in it.

When a header file is excluded, it is ignored in all cases in which it is read by the compiler, in reaction to an #include statement.

Other Reasons to Exclude Files

Even if a file can be parsed by Coco, it may be a good idea to exclude it. In C++, this is especially true for template libraries like the Standard Template Library. Such a library contains template code for data structures, like std::vector or std::map. These classes are built for efficiency, and when they are instrumented, the additional code may slow down the instrumented program considerably. This is the other reason why in the example we excluded the whole MinGW library directory tree and not just a few files with unusual syntax.

The same is true for C and C++ libraries in general. Some of them contain template data structures and most contain inline code. (Note that newer versions of C allow an inline statement too). They will be instrumented by default and may slow down the program; in every case they will show up in the coverage report even if they do not belong there.

Other Languages

For completeness:

  • C# does not have an include statement, therefore the problem does not occur.
  • For QML coverage, the Qt library files are excluded by default. When the environment variable COCOQML_DEFAULT_BLACKLIST is set to 0, they are included again.



Multi-User, Blackbox Testing with Squish Coco


Software development success is often achieved in a team of engineers, and part of this success comes from thorough, comprehensive testing. In some settings, source code is shared among everyone on a given team, but in many cases the source code is secured, and only parts of it are available to a given engineer. Imagine the case where you have a distributed team of test engineers, none of whom have access to the application's source code.

Ideally, we would like to collate the manual tests of each test engineer and the unit tests of the developers into one global coverage report, to analyze the testing efforts of the whole team. In most code coverage tools, this sort of blackbox testing is not possible: without the source code, the tool is not able to produce coverage metrics.

This is not only possible in Squish Coco, but easy and quick to achieve. In this week's tip, we'll cover how to handle multi-user testing in Squish Coco in order to get the coverage of all tests from all engineers, even if the source code is not available to everyone on the team.

To demonstrate this, we will be using a C++ parsing program that acts simplistically as a calculator for standard expressions. This program is packaged with all versions of Squish Coco. In this example, we have a master user who has access to the source code, and additional test engineers who do not have access to the source code, but are completing manual tests. The developer in charge of the source code has written a number of unit tests.

Unit Tests

To begin, the developer executes the unit tests:

$ make clean
$ ./instrumented make tests

We can open the coverage report within the CoverageBrowser:

$ coveragebrowser -m unittests.csmes -e unittests.csexe

In the condition coverage view, we can see the coverage is at 45.581%. Taking a slightly deeper look into one of the functions, factorial, we see a 0.000% condition coverage. It’s clear that no unit tests have been written for the factorial function. 

Distributing the Application

The developer will now ship the instrumented application to the manual testers. First, he or she will instrument the application, so that the coverage of the manual tests can be recorded.

$ ./instrumented make

This generates an executable for the parser program, called parser. The developer will ship this to the testing team for further testing. Whenever this program is run, and manual tests are entered into the parser, a *.csexe file will be generated. This is the execution report. 

Creating a Blackbox Database

If the intent is to keep the source code secure, the developer can create a ‘blackbox’ database in which execution reports can be imported, but no source code is shown. This is achieved through the following:

$ cmmerge -o parser_blackbox.csmes --blackbox parser.csmes

Manual Tests

With the executable distributed to the testing team, the first test engineer issues some tests to cover the factorial function:

> 3!

	Ans = 6

> 0!

	Ans = 1

> 1.54!

       Error: Integer value expected in function factorial (col -1)

And a parser.csexe file is now generated. 

This can be opened in the CoverageBrowser:

$ coveragebrowser -m parser_blackbox.csmes -e parser.csexe      

…where “parser.csexe” is the execution report generated by doing the manual tests. 

We see that, by using a blackbox database, the source code is not viewable, and only a reduced amount of information is shown (i.e., the global coverage, a window for comments, etc.). Click Save before exiting the CoverageBrowser.
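Clicking Save imports the execution report into the blackbox database. Roughly the same can be done from the command line; a sketch reusing the file names from above, where the title is an arbitrary label for this tester's work:

$ cmcsexeimport -m parser_blackbox.csmes --title="TESTER1" parser.csexe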

Merging the Results

Once the manual testers have completed their work, it is up to the developer to merge all coverage data in order to get a full scope of the coverage. 

The developer will first need to import his or her unit tests. In other words, load the execution report:

$ cmcsexeimport -m unittests.csmes --title="UNIT-TESTS" unittests.csexe

Finally, the developer can merge all reports into one, called “all_tests.csmes.”

$ cmmerge -o all_tests.csmes -i parser.csmes parser_blackbox.csmes unittests.csmes

Opening this in the CoverageBrowser, we see the following:

Note that in the bottom right hand corner of the window, we can toggle the execution view to show coverage levels from manual tests and unit tests (located in CppUnit). 

In the above screenshot,  we also verify that factorial is now 100% covered, owing to our manual tests. 

Generalizing to Multiple Testers

The example above covered only one developer-tester pair, but the same steps can be generalized to multiple testers. 

To collate all results, issue the following:

$ cmmerge -o all_tests.csmes -i parser.csmes unittests.csmes parser_blackbox.csmes parser_blackboxUSER2.csmes ... parser_blackboxUSERN.csmes

Note that each user should have different names for their tests. In the above screenshot, we’ve named the one tester’s efforts “TESTER1.” 


Hooking Into Java Applications Launched Via Third Party/Custom Launcher Applications.


Sometimes, while trying to automate a Java AUT the hooking fails and the following error is shown:

Either the application could not be started or it was started but Squish failed to hook into it.

In such cases it is possible that the AUT executable or binary is actually a custom launcher application.

Launcher applications are small executable programs which load and start a Java application. When starting the AUT, Squish sets some environment variables in which additional JVM arguments are placed, and these arguments are passed on to the launcher application. When, for some reason, the hooking does not work, we have to provide these arguments manually.

I will take NetBeans IDE 8.1 as an example to explain how to pass the JVM arguments manually.

Because NetBeans IDE supports _JAVA_OPTIONS, I will first unset _JAVA_OPTIONS to simulate the hooking issue.

Now, in order for Squish to be able to hook into the application, we will create a batch file called "Netbeans.bat" and place it in the Home folder. This batch file has the following content:

set _JAVA_OPTIONS=
cd "C:\Users\Neha\Netbeans\NetBeans 8.1\bin"
netbeans "-J%SQUISH_JAVA_DEF_1%" "-J%SQUISH_JAVA_DEF_2%" "-J%SQUISH_JAVA_DEF_3%" "-J%SQUISH_JAVA_DEF_4%" "-J%SQUISH_JAVA_DEF_5%" "-J%SQUISH_JAVA_DEF_6%" "-J-Dsquish.bcel=%SQUISH_PREFIX%\lib\bcel.jar"

The batch file unsets the _JAVA_OPTIONS environment variable and then starts netbeans.exe with the JVM arguments.
Once this is done, we set Netbeans.bat as the Application Under Test in the test suite's settings.

This is how hooking can be managed by providing additional Java VM parameters manually to a launcher created by NetBeans. The way of passing these parameters can differ depending on the technology on which the launcher is based.


Improved Syntax Highlighting for the Script-based Object Map


With the release of the Script-based Object Map, we also added some syntax highlighting options to the Squish IDE. As references to the Script-based Object Map will make up a big part of your test scripts, it’s important to be able to see at first glance which scripted object names are involved. The improved readability of the new highlighting options should therefore also increase the maintainability of your UI automation test scripts.

The following image shows the default way a basic JavaScript test case is displayed within the Squish IDE.

Example of default highlighting options

The next image is an example of the added highlighting options.

Example of new highlighting options

JavaScript Highlighting Options

As you can see, the added highlighting options improve the readability significantly. For JavaScript, you can find the new highlighting options at:

Edit ⭢ Preferences ⭢ JavaScript ⭢ Editor ⭢ Syntax Coloring ⭢ Identifier
JavaScript highlighting options menu

In this menu, you can edit both the Properties and Variables highlighting options to your liking.

Python Highlighting Options

For Python, you can find the new highlighting options at:

 Edit ⭢ Preferences ⭢ PyDev ⭢ Editor 
Python highlighting options menu
Added Python Highlighting Options

Since PyDev provides more limited information about the processed source code, the variables and property locations are only approximated. The resulting highlighting is nonetheless very comparable to its JavaScript counterpart.

Perl Highlighting Options

For Perl, we didn’t need to make any changes as it already supports highlighting variables. You can find the Perl highlighting options at:

 Edit ⭢ Preferences ⭢ Perl ⭢ Editor ⭢ Syntax ⭢ Variable

Tcl Highlighting Options

Tcl also has existing support for highlighting variables. You can find the Tcl highlighting options at:

 Edit ⭢ Preferences ⭢ Tcl ⭢ Editor ⭢ Syntax Coloring ⭢ Core ⭢ Variables 

Ruby Highlighting Options

While Ruby also has many highlighting options, none really fit the needs of the Script-based Object Map. We hope to be able to provide better highlighting options for Ruby in the future. You can find the Ruby highlighting options at:

 Edit ⭢ Preferences ⭢ Ruby ⭢ Editor ⭢ Syntax Coloring


Accessing QQmlContext Properties in Squish Test Scripts


When loading a QML object into a C++ application, it can be useful to embed some C++ data directly that can be used from within the QML code. This makes it possible, for example, to invoke a C++ method on the embedded object, or use a C++ object instance as a data model for a QML view.

The ability to inject C++ data into a QML object is made possible by the QQmlContext class. This class exposes data to the context of a QML object so that the data can be referred to directly from within the scope of the QML code.

QQmlContext Properties and Squish

When testing a QML object which relies on embedded C++ data, it is possible to perform verifications and modify its values, by obtaining a QQmlContext instance and using its contextProperty function.

Obtaining the QQmlContext

There are different ways of obtaining a QQmlContext in a Squish test script.

The most straightforward way to access the so-called rootContext of a QQmlEngine is by obtaining a reference to the engine and invoking its rootContext function.

Squish exposes a global function named qmlEngine which accepts a QObject and returns the QQmlEngine associated with the object if any.

def getRootContextFromEngine(qObject):
    engine = qmlEngine(qObject)
    test.verify(not isNull(engine), "Engine fetched from QObject is valid")

    return engine.rootContext()

An alternative way of obtaining the QQmlContext for a given object is to use the global function named qmlContext, which accepts a QObject and returns the QQmlContext associated with the object, if any.

def getContextFromQObject(qObject):
    context = qmlContext(qObject)
    test.verify(not isNull(context), "Context fetched from QObject is valid")
    
    return context

It is also possible to obtain the rootContext associated with a QObject. This is achieved by first getting the QQmlContext and then traversing the parentContext hierarchy up to the root.

def getRootContextFromQObject(qObject):
    context = getContextFromQObject(qObject)
    
    while not isNull(context.parentContext()):
        test.log("Going up one QML context")
        context = context.parentContext()
    
    return context

There is yet another way, though this works only in combination with QQuickView and its sub-types.

QQuickView exposes a function named rootContext, which can be used to retrieve the view’s rootContext as shown in the example in the following section.

Working with the QQmlContext

Once obtained, the QQmlContext can be useful for a variety of things; in Squish test scripts the most useful is accessing embedded C++ data. When accessing C++ data through the contextProperty function on a QQmlContext, a QVariant is returned, which can simply be unpacked with object.convertTo as shown in the examples.

An example usage could be to verify that the ListView associated with a ListModel has the same number of entries.

Suppose we want to test an animalList application which has a ListView to display a list of animals, where the animals are provided through a C++ class named animalModel, which is exposed to QML.

import names

def main():
    startApplication("animalList")

    rootContext = waitForObject(names.o_QQuickView).rootContext()
    
    animalModel = object.convertTo(rootContext.contextProperty("animalModel"), "QObject")
    
    test.compare(animalModel.rowCount(), waitForObjectExists(names.o_ListView).count, "Verify all model entries are contained in the ListView")

The above example is a rather theoretical example as it tests if Qt works correctly, but it could be useful in an application where you have your own ListModel/View implementation.

Yet another example usage could be to ensure a ListView is visually updated whenever its ListModel content changes.

import names

def main():
    startApplication("animalList")

    rootContext = getRootContextFromEngine(waitForObject(names.o_ListView))
    
    animalModel = object.convertTo(rootContext.contextProperty("animalModel"), "QObject")
    
    animalModel.removeRow(0)
    
    test.vp("VP1", "Verify UI is updated when model is")

Wrapping Up

Accessing the QQmlContext and its properties using Squish’s built-in functions is easier than one might expect, especially unpacking the QVariant returned by the contextProperty function to its actual type.

It is enough to use “QObject” as the second argument to object.convertTo, because the QVariant knows to which type to expand.

The global function castToQObject sounds quite similar, although it can lead to crashes in some cases. It is therefore recommended to stick with object.convertTo.

This technique should be used with caution, otherwise you might end up with test cases that are bound to the internals of your Application Under Test.


Verifying the Identity of JSON Texts


Test data might not always be present in the form of line-based file formats, like Comma-separated Values or Tab-separated Values. The JavaScript Object Notation (JSON) format — a lightweight subset of the JavaScript language — has gained a lot of popularity as an exchange format for communicating with web services in the past years. The application to be tested might offer an export functionality that produces JSON, and we are supposed to verify that the exported file is correct. In addition, the exported file is not required to match the expectation character-by-character: object keys do not have to show up in a given order, and there are also no constraints on whitespace usage. This flexibility makes it hard to come up with a simple solution for comparing two JSON texts.

Example Usage with Squish

Luckily, Squish provides the test.compareJSONFiles(expectedFile, actualFile) function that takes care of that flexibility while comparing two JSON files.

For this example, we will use Python as the scripting language, but the function is available in all other scripting languages Squish supports as well.

Let’s test drive the test.compareJSONFiles() function using one of the various JSON formatting webservices. The test case is defined easily: we inject a JSON text, and – after pressing a button – expect the reformatted JSON text to be equal to the given one.

We create a new test suite and test case, and a test data file ‘input.json’ that holds a sample input JSON text:

[1,
2,
{"success": false, "error":
31},
3]

And we add a test script like this:

# -*- coding: utf-8 -*-

import names

def main():
    startBrowser("https://jsonformatter.curiousconcept.com/")

    with open("testdata/input.json", "r") as inp:
        inputData = inp.read()

    typeText(waitForObject(names.formatter_input_area), inputData)

    selectOption(waitForObject(names.formatter_options_select), "Compact")
    clickButton(waitForObject(names.formatter_process_button))
    formatted = waitForObject(names.formatter_result_area).innerText

    with open("/tmp/actual.json", "w") as out:
        out.write(formatted)

    test.compareJSONFiles("testdata/input.json", "/tmp/actual.json")
    test.compareTextFiles("testdata/input.json", "/tmp/actual.json")

We start a new browser with the given URL via startBrowser(). We read the JSON text we want to pass to the web service from our test data file and type it into the corresponding input area with typeText(). We then choose a different formatting option with selectOption() so that the formatting web service actually makes some changes to our input data, click the Process button, and pick up the re-formatted JSON text from the result area. To comply with the test.compareJSONFiles() API (it takes paths to files in the filesystem), we write the retrieved JSON text to disk.

Finally we compare our two files. This is the relevant part in the test results:

<scriptedVerificationResult time="2019-07-11T14:45:36+02:00" type="PASS">
...
    <text>JSON Comparison</text>
    <detail>Files are equal</detail>
</scriptedVerificationResult>

As confirmation how different the files are on a character-by-character level, we check what test.compareTextFiles() reports:

<scriptedVerificationResult time="2019-07-11T14:45:36+02:00" type="FAIL">
...
    <text>Plain Text Comparison</text>
    <detail>'input.json' and 'actual.json' are different. Diff:
-   [1,
- 2,
- {"success": false, "error":
- 31},
- 3]
+ [1,2,{"success":false,"error":31},3]</detail>
</scriptedVerificationResult>

So while the latter function shows how both JSON files differ textually, compareJSONFiles() allows us to determine that both JSON texts in fact equal each other.


Starting Your AUT Using Absolute Paths in startApplication()


To start an application in a Squish test script using startApplication(),

startApplication(name)

(where name refers to an entry in the server.ini file), one registers the AUT with its absolute path in the file server.ini, as opposed to hard-coding the path in the test script and modifying it according to the actual path of the AUT on the current computer.

(For information on how to register the AUT in the Squish IDE manually please see our documentation).

However if you have ten computers, then you need to do this registration step on each of the computers.

In our support work with our customers, we often find it useful to provide the following script snippet to our customers where we specify the path of the addressbook example application directly in startApplication():

import os

startApplication(os.getenv("SQUISH_PREFIX") + "/examples/qt/addressbook/addressbook")

In this snippet, the function startApplication() is given the path to the addressbook example application. The Squish installation directory is retrieved with os.getenv() by reading the environment variable "SQUISH_PREFIX" (it contains the path to the Squish installation; Squish sets this environment variable for the AUT processes), and the relative path to the addressbook example, which is the same in all Squish binary packages, is appended to it.

The advantage of this approach when sending script snippets to our customers is that they can copy and paste the script snippet into a test case without having to go through the extra step of registering the application first.

You, too, may have cases where you need to retrieve the path from an environment variable, a text file or some other source. In such a case, it may be useful to use the above approach.
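A minimal sketch of such a case, assuming a hypothetical environment variable AUT_PATH that your test machines or CI jobs set to the application's location:

import os

def main():
    # AUT_PATH is a hypothetical variable; fall back to a default location if it is not set
    autPath = os.getenv("AUT_PATH", r"C:\default\path\to\aut.exe")
    startApplication(autPath)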

Or, if you have an application which is in the same path on all of your computers, then you can provide it directly to startApplication():

startApplication("C:\Windows\system32\notepad.exe")


Code Coverage of Multi-Platform Applications


Many applications now target several operating systems. In most cases, the code is similar for each of them and only the toolchain is different (e.g., Visual Studio for Windows, XCode for OS X, gcc for Linux, …), along with the library which provides the portability of the code (e.g., the C++ STL, Boost, Qt, …).

In general, by choosing the right development environment, it is possible to limit the platform-specific code to a few #if/#endif sections. In most cases, the unit tests and other automatic tests should of course be re-executed on each target platform, but it makes no sense to run an intensive manual test cycle on each of them. It is more efficient to split the testing effort over all platforms and to collect and merge the results together.

Another aspect is that it makes sense to find the differences between the platforms and execute a minimal set of dedicated tests for each of them. This approach is interesting for embedded systems: if an application can be executed on a host system through some wrapper, it makes sense to run all unit tests there, then look at what could not be covered and test that part on the real hardware.

In all cases, we need to build our application in different environments, collect the result and be able to identify the differences of the code coverage.

Compiling the Same Code on Two Platforms

Let’s take a small sample composed of two files:

  • One common source which contains only the main() function. This file is common and compiled identically (with the same preprocessor and instrumentation settings) on all platforms.
  • One file which contains some #ifdef statements and which executes different code on each platform. In our case, it only prints a different string. So the source code is the same on all platforms, but the preprocessor output, and therefore the compiled code, is different on each platform.

main.c:

extern void print_information();

int main()
{
    print_information();
    return 0;
}

information.c:

#include <stdio.h>

void print_information()
{
#ifdef UNIX
    printf( "Unix System\n" );
#else
    printf( "Other System\n" );
#endif
}

The compilation on Unix can be performed with:

$ csgcc -o app_unix.exe information.c main.c -DUNIX

On Windows, it would be:

$ cscl /Feapp_windows.exe information.c main.c

Then on each platform, it is possible to execute the application and import the coverage:

$ ./app_unix.exe
$ cmcsexeimport -m app_unix.exe.csmes --title="Test on Unix" app_unix.exe.csexe --delete

And the same on Windows:

$ .\app_windows.exe
$ cmcsexeimport -m app_windows.exe.csmes --title="Test on Windows" app_windows.exe.csexe --delete

Merging the Coverage of the two Platforms

The coverage on each platform looks as follows:

Code Coverage on the Windows Platform

The coverage is as expected: the main() function is executed and in the print_information() function, the line compiled for this platform ( ‘printf ( “Other System\n” );‘) is also marked as executed.

What we would like to know is if the line compiled for Unix is also covered (‘printf ( “Unix System\n” );‘ ). For that, the first approach would be to merge app_windows.exe.csmes and app_unix.exe.csmes. The result would be:

Merging the Coverage from the Two Platforms

The result is not really what is expected: we see that main.c and information.c appear twice. If we switch to the tree mode of the source dialog, it is clear why this happens: the source files are not in the same directory.

Source with directory after merging.

We then need to tell the CoverageBrowser that these files are identical. For that we use the context menu of the file dialog and click on ‘Rename Sources…’ and replace the directories to match an identical one with a regular expression:

Actual Name: .*[\\/]([^/\\]*)
New Name: src/\1

The expression matches any string which contains a slash or backslash and extracts the trailing part which does not contain one, using the sub-expression ([^/\\]*). The parentheses capture that part so it can be referenced with the placeholder \1 ('1' refers to the first parenthesized expression). We rename the files by prepending src/ to the file name without its path.

The CoverageBrowser then computes a preview of the rename operation. If there are conflicts or errors, the corresponding lines are highlighted in red. It verifies that if two files have the same name after the transformation, they also have the same source code. So it is not possible to rename main.c into information.c.

File Rename Preview

Two C++ files are allowed to have different pre-processed output, but in this case two versions of the same source are created. For our sample, this is the case for information.c: the source file is identical, but the pre-processed output is different. That's why we see two versions, 'information.c #1' and 'information.c #2', of the original source 'information.c'. This also implies that the function print_information() of the file information.c has two versions.

Merge result after unification of the source files.

This file renaming reduces the number of source files to analyze from 4 (2x main.c and 2x information.c) to 3. This may not seem like a big gain, but in most real projects, where more than 99% of the source code is identical across platforms, this reduction brings the complexity of the code to analyze close to that of a single-platform version.

Scripting the File Renaming

Squish Coco also allows us to script the file renaming with the tool cmedit. First, cmedit can list the source files with the switch -l:

$ cmedit -l app_merged.exe.csmes
/net/firewall.vpn/export/TMP/sample/information.c
/net/firewall.vpn/export/TMP/sample/main.c
Z:\sample\information.c
Z:\sample\main.c

We get the same information here as in the CoverageBrowser dialog. We can then rename the sources with the switch -r:

$ cmedit --dry-run -r '.*[\\/]([^/\\]*),src/\1,r' app_merged.exe.csmes
'/net/firewall.vpn/export/TMP/sample/information.c' -> 'src/information.c'
'/net/firewall.vpn/export/TMP/sample/main.c' -> 'src/main.c'
'Z:\sample\information.c' -> 'src/information.c'
'Z:\sample\main.c' -> 'src/main.c'

The expressions following the -r switch are separated by commas. The first one is the pattern for the actual file name to match, the second one is the destination file name, and the last one ('r') specifies that a regular expression is used.

--dry-run allows us to execute the command without modifying the file; if the result is as expected, removing the switch commits the changes.

Conclusion

With the source file renaming functionality, it is possible to handle binaries generated on several platforms from the same sources. The coverage data can be merged, so a shared test suite needs to be executed on only one platform. Squish Coco checks that the sources are identical before renaming the files, which ensures that the operation is performed on the same product release. On the other hand, it allows the pre-processed code to differ, to handle platform-specific compilations.



Using the Squish Function ‘attachFile’ to Your Advantage


Copying test results in the form of files to your test result directory can be automated.

Imagine the following scenario: your application interacts with certain files, and those files change from test to test. For logging purposes, you want to export the files to your test directory. Additionally, the attached file will also be visible in the test report.

Example

Add test.attachFile("/Path/to/folder/MyAddresses.adr") to your script:

...
test.compare(table.rowCount, 125)
test.attachFile("/Path/to/folder/MyAddresses.adr")
activateItem(waitForObjectItem(names.address_Book_MyAddresses_adr_QMenuBar, "File"))
...

(The code for this example can be found in squish/examples/qt/addressbook/suite_py/tst_general).

This will result in the copy of “MyAddresses.adr” to the result directory.

With the squishrunner option --resultdir, you can further specify the directory your reports and attached files are copied to.
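A sketch of such a call for the addressbook example used above (adjust the paths to your setup):

./squishrunner --testsuite /path/to/suite_py --testcase tst_general --resultdir /path/to/results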

In the IDE, you find the option to set the result directory under:

Edit > Preferences > Squish > Logging

Find more information about attachFile in our documentation.

For more information about Squish, visit https://www.froglogic.com/squish/


Testing .NET Core Applications


Recently, Microsoft announced that .NET Core 3.0 will become .NET 5. The majority of .NET Framework libraries, including the GUI toolkits Windows Forms and WPF, are already ported. Since this seems to be the future of .NET development, we took a closer look and managed to add the initial support for Windows Forms and WPF GUI toolkits. So in the next major release, Squish will have support for applications which use these toolkits on the .NET Core platform.

Introduction

If you already have an application that can run on .NET Core, and you would like to test it with Squish, please let us know, and we will provide a snapshot package for testing. Also, feel free to let us know if you don’t have an application but just would like to try .NET Core sample applications.

Building Examples

Building .NET Core sample applications is easy: download an SDK from the download page and build an example. A Windows Forms example can be created and built by issuing the following commands at the command prompt:

<path_to_dotnet_core_sdk>\dotnet.exe new winforms
<path_to_dotnet_core_sdk>\dotnet.exe build
<path_to_dotnet_core_sdk>\dotnet.exe run

Self-Contained Applications

.NET Core applications can also be built as “self-contained.” That is, they can be built with a “built-in” .NET Core runtime, so that the runtime does not need to be present on the machines on which you want to run the application. This is a nice option if you don’t want your users to install a new runtime, or until .NET Core is released and more widespread.

To make the sample above “self-contained,” add the following line…

<RuntimeIdentifiers>win-x64</RuntimeIdentifiers>

…into the sample’s project file winforms.csproj, under the PropertyGroup element. After this, build the sample using the publish command:

<path_to_dotnet_core_sdk>\dotnet.exe publish -c release -r win-x64 -o app

The application, and everything needed for it to run on other computers, will be put into the folder app. You should be able to simply copy (or zip/unzip) this folder onto another machine, and the application should work out-of-the-box.

Testing with Squish

With this and similar applications, you should be able to use Squish in the same way as you did before with the standard .NET Framework applications: register the executable as the Application Under Test (AUT) and start recording your test scripts!
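
For instance, registering the self-contained Windows Forms sample from above might look like the following sketch (the Squish installation path and the application folder are placeholders, and the executable name depends on what your project produces):

<path_to_squish>\bin\squishserver --config addAUT winforms <path_to>\app

After that, the AUT can be selected in the test suite settings and script recording works as usual.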

The post Testing .NET Core Applications appeared first on froglogic.

Verifying the Visual Order of GUI Elements

This tip demonstrates how to verify that GUI elements are properly arranged from left to right and top to bottom.

Verifications in Squish

The Squish GUI Tester has several ways to do functional tests of Graphical User Interfaces. One recurring scenario is verifying the layout of GUI elements. A proper layout is usually defined by the following two requirements:

  • UI elements do not overlap and thus are fully visible to the user
  • UI elements are beside or under each other, meaning that they appear in a certain order that does not change accidentally

Squish supports verification of objects in three major ways. However, out of the box, none of them is a perfect fit for verifying the layout and order of GUI elements.

Property Verification does not catch relations between multiple GUI elements. Verifying a property captures a single aspect of a GUI element only, and it compares an expected with an actual property value instead of expressing a relation.

Screenshot Verification of whole dialogs does, to a certain degree, verify geometry and how the geometries of GUI elements relate to each other, because geometry and layout are part of the visual representation. However, screenshots are often not very portable between computers or operating systems: fonts, colors and object sizes differ from one setup to the next, and ensuring that they don’t is a whole topic on its own. In addition, any change to a GUI element can potentially change the visual representation even if the layout and the relation to other GUI elements are not affected. To sum it up, screenshots are usually too strict.

Visual Verification combines a screenshot, the object hierarchy, geometry and a selected set of properties. Again, that may be too strict for the task at hand: some toolkits have different element hierarchies between operating systems or toolkit versions. On the other hand, a Visual Verification does not necessarily guarantee that GUI elements are laid out on screen in a certain way, as the relation of GUI element geometries may not be expressed by either the hierarchy or the set of verified properties.

Verifying Order and Layout

The proposed solution to verify the order as well as the layout of GUI elements is to look up a list of objects by name and fetch their global screen coordinates. Fetching screen coordinates is possible in Squish via the object.globalBounds() function:

o = waitForObjectExists("{type='Button' text='Open'}")
bounds = object.globalBounds(o)

The list of objects to fetch should be predetermined inside the test script to ensure a certain order of objects. Element names for the list of objects can be fetched via the Squish IDE by interactively picking the objects and copying their object name. Verifying the layout of a bunch of objects can be as simple as a single function call that receives the list of objects to verify:

validateLayout("Validating toolbar layout",
               [names.address_Book_New_QToolButton,
                names.address_Book_Open_QToolButton,
                names.address_Book_Save_QToolButton,
                names.address_Book_File_QTableWidget,
               ])

Screen coordinates are compared between adjacent GUI elements. For each pair of GUI elements we can verify whether they are beside or beneath each other and also ensure that they do not overlap. The following function implements a simple coordinate and overlap check and expects a left-to-right layout:

def fitsLeftToRightLayout(newBounds, oldBounds):
    """Return True if newBounds follows oldBounds in a left-to-right,
    top-to-bottom layout without overlapping it."""
    # the new object must not start above the old one
    if newBounds.y >= oldBounds.y:
        oldBottom = oldBounds.y + oldBounds.height
        # new row: the new object starts at or below the old one's bottom edge
        if newBounds.y >= oldBottom:
            return True
        oldRight = oldBounds.x + oldBounds.width
        # same row: the new object starts at or to the right of the old one's right edge
        if newBounds.x >= oldRight:
            return True
    return False

In case a layout problem is detected, the logic can simply log the object name that had a mismatch. For interactive use, the highlightObject() function of Squish can be used to visually highlight mismatches. The complete implementation that does this and calls the above layout check can be seen here:

def validateLayout(title, objectNames):
    test.startSection(title)
    try:
        lastObject = waitForObjectExists(objectNames[0])
        lastBounds = object.globalBounds(lastObject)
        for name in objectNames[1:]:
            newObject = waitForObjectExists(name)
            newBounds = object.globalBounds(newObject)
            fits = fitsLeftToRightLayout(newBounds, lastBounds)
            if not test.verify(fits,
                               "Object fits into layout: {0}".format(name)):
                highlightObject(lastObject)
                highlightObject(newObject)
            lastObject = newObject
            lastBounds = newBounds
    finally:
        test.endSection()

Outlook and Improvements

The above script code is kept simple to make it easier to understand as part of this blog entry. There are several ways in which it could be improved further.

Logging in case of mismatches could be done in other ways, including logging of application specific properties, taking desktop screenshots or by saving an object snapshot of the surrounding dialog or window via saveObjectSnapshot(). Especially the latter two would allow for a more thorough post-analysis.

Other layout directions could be tested in a similar way; the code currently expects a left-to-right layout. Furthermore, layout mechanisms that align objects on their centers may cause mismatches. The logic would have to be extended to take such center/middle alignments into account as well.

The current code compares single geometry properties only. For more advanced checks, a more capable geometry type can make the logic more expressive. Comparisons like rectangle.isLeftOf(otherRectangle) or rectangle.center.y == otherRectangle.center.y keep complex checks readable.
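
As a rough sketch of that idea (this helper is not part of the Squish API; the class and its methods are made up for illustration and simply wrap the values returned by object.globalBounds()):

from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])

class Rect:
    """Convenience wrapper around the global bounds of a GUI element."""
    def __init__(self, bounds):
        self.x, self.y = bounds.x, bounds.y
        self.width, self.height = bounds.width, bounds.height

    @property
    def right(self):
        return self.x + self.width

    @property
    def bottom(self):
        return self.y + self.height

    @property
    def center(self):
        return Point(self.x + self.width / 2.0, self.y + self.height / 2.0)

    def isLeftOf(self, other):
        # completely to the left, no horizontal overlap
        return self.right <= other.x

    def isAbove(self, other):
        # completely above, no vertical overlap
        return self.bottom <= other.y

With such a wrapper, a check like Rect(object.globalBounds(a)).isLeftOf(Rect(object.globalBounds(b))), or a center alignment comparison on .center.y, reads close to the prose description of the layout.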

A fully working example that verifies two layout aspects on the addressbook example application of Squish for Qt can be downloaded as suite_layout_validation_py.zip.

The post Verifying the Visual Order of GUI Elements appeared first on froglogic.

Measuring Code Coverage on Devices with Limited Memory

In this article, we would like to explore the options available to a Squish Coco user who wants to use the tool on devices with limited memory.

Things clearly do not get better with less memory, so the whole discussion centers on one question: what can we do to suffer less from the lack of memory beyond what we already have? Let’s start, then.

Optimization of Memory Consumption: Main Considerations

Instrumenting the code comes with additional memory overhead. This is unfortunate, but instrumentation is an essential part of Squish Coco and is what makes its results more precise than those of techniques that do not instrument the code. And, by definition, instrumentation cannot avoid using some additional memory.

We can, however, fine-tune the CoverageScanner (the part of the Squish Coco suite responsible for instrumentation) and choose which features we really need, and to what extent, and which features are more of a luxury. Since we are usually not willing to give up a luxury for nothing, it is also important to understand the impact each feature has on memory consumption.

There are three main things we can do to reduce the memory consumption drastically:

  1. Use a more basic coverage level (more on this later)
  2. Measure ‘hits’ instead of ‘counts’
  3. Reduce the range of the counter used for counts

We start with the second and third options:

Using ‘Hits’ Instead of ‘Counts’

By default, the CoverageScanner uses counters to make it possible to count how many times a specific part of the code was executed.

A counter is a variable (which is inserted into the code during instrumentation), where we keep a current count. So, if we want to be able to track up to 255 executions of one particular piece of code, we have to have a counter of a size of at least 1 byte.

On the other hand, it may be enough to know that a piece of code was executed at least once, i.e., that we had a ‘hit’. We still need a variable to store this, but a hit requires less memory to store (the information is only binary) and less work to maintain, because we do not need to count and therefore need fewer additional operations in the code.

Now, consider how many such variables (for ‘hits’ or for ‘counts’) an average program needs: approximately the number of blocks (functions, loop bodies, etc.) plus the number of branches (e.g. the branches of if … then … else statements).
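
As a rough illustration (the numbers are invented): an application with 10,000 blocks and 5,000 branches needs about 15,000 such variables. With the default 8-byte counters that amounts to roughly 120 KB of additional data, with 1-byte counters about 15 KB, and with ‘hits’, which only record executed-or-not, at most that much and potentially less.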

We pass the --cs-hit option when compiling with the CoverageScanner to get ‘hits’ instead of ‘counts’.

Reduce the Range of the Counter Used for ‘Counts’

As of the date of this article, Squish Coco supports 1-, 4- and 8-byte counters, giving ranges of 2⁸, 2³² and 2⁶⁴, respectively; the default is 8 bytes (a range of 2⁶⁴). 2³² is approximately 4 billion executions, and 2⁶⁴ roughly 4 billion times 4 billion executions. So, without much doubt, we can usually reduce 4 billion times 4 billion to “only” 4 billion, which saves 4 bytes of memory for every counter variable.

We use the option --cs-counter-size=<number>, where <number> is either 1 or 4, to adjust the size of the counters used. Of course, this is only useful if the --cs-hit option is not already in use.

Use a More Basic Coverage Level

We can list the coverage levels provided by Squish Coco in ascending order from ‘basic’ to ‘advanced’ as follows:

  1. Statement Block Coverage
  2. Decision Coverage (includes Statement Block Coverage)
  3. Condition Coverage (includes Decision Coverage)
  4. Modified Condition/Decision Coverage, or MC/DC
  5. Multiple Condition Coverage, or MCC

These levels have a direct impact on memory consumption: a higher level means greater memory consumption.

Please note that every level includes Statement Block Coverage, so that is the absolute minimum. The other levels add quality, but also overhead, including, of course, memory overhead.

  • Statement Block Coverage: we count the blocks of code statements that are executed.
  • Decision Coverage: every decision is recorded as a whole, regardless of how many conditions were involved in making it.
  • Condition Coverage: Decision Coverage plus every condition, separately. ‘Separately’ means that combinations of conditions are not taken into account; we are only interested in whether every condition was TRUE and FALSE and (possibly) how many times. We cannot guarantee that any particular condition actually influenced a decision.
  • MC/DC: for every condition, at least the minimum number of combinations with the other conditions needed to show that the condition can affect the decision.
  • MCC: every possible combination of conditions is taken into account.

To track all combinations, the CoverageScanner uses, for every decision, a truth table whose lines consist of particular combinations of conditions.

Now, it is clear that using a more basic coverage level helps reduce the amount of memory used. For example, instead of MC/DC, we may use Condition Coverage.

However, there is an even better option:

As mentioned before, the CoverageScanner uses tables to track conditions for the MC/DC and MCC coverage levels. The default number of lines in these tables is 8000. This can be changed with the options --cs-mcc-max=<size> and --cs-mcdc-max=<size>, which set the maximum number of table lines for MCC and MC/DC, respectively. The most important part is that if a particular decision would require a table larger than the set maximum, the CoverageScanner falls back to condition coverage for that decision. In other words, this option lets us (partly) preserve the quality level while reducing memory consumption.

There is another option that could be considered separately, but it is best mentioned here while discussing coverage levels: partial instrumentation.

Consider a typical if-statement: if (a == true) doSmth(). There is an ‘action’ for ‘condition is true’, but since there is no ‘else’, there is in fact nothing for ‘condition is false’. In some situations it is acceptable to treat this ‘no action’ branch as effectively non-existent. We can then spare some counting (and hence memory) for such statements, because only ‘condition is true’ is counted.

Please be aware that whether partial instrumentation is justified depends strictly on the code, because the absence of an ‘action’ branch may very well be significant.

Please use the following CoverageScanner options:

  • --cs-statement-block: enable statement block coverage
  • --cs-decision: enable decision coverage
  • --cs-condition: enable condition coverage (Default)
  • --cs-mcc-max=<size>: set the maximum size of the instrumentation tables used for MCC if enabled via --cs-mcc
  • --cs-mcdc-max=<size>: set the maximum size of the instrumentation tables used for MC/DC if enabled via --cs-mcdc
  • --cs-partial-instrumentation: enable partial instrumentation
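
As an illustration only, a build that combines several of these switches could look like the following sketch; it assumes a GCC-based toolchain driven through Coco’s csgcc compiler wrapper, and the file names are placeholders:

csgcc --cs-decision --cs-hit --cs-partial-instrumentation --cs-no-execution-time -o myapp main.c util.c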

More Options to Reduce Memory Consumption

The three options described above are not the only possible ways to reduce memory consumption when working with Squish Coco.

a) Disable the performance counters, which are ENABLED by default. If no performance measurement is needed, these counters can be saved.

Please use the --cs-no-execution-time option for that.

b) Adjust the output buffer size. Files of the form <toolname>.cspro contain the profiles of the build tools (“compilers” — please see the Squish Coco documentation for further details) and are very useful for adapting Squish Coco to our needs and limitations. To reduce memory consumption, adjust the setting OUTPUT_BUFFER_SIZE=<size> in this file, where <size> is the size of the internal buffer used for generating an execution report. A higher value potentially means better performance, but at the cost of memory.

We typically recommend a size value of 64 for platforms with limited memory.

c) Use the --cs-minimal-api CoverageScanner option. In this case, the CoverageScanner API has fewer dependencies on external libraries than the default API. Among other things, this should help to reduce memory consumption.

d) Use --cs-combine-switch-cases. This option allows the CoverageScanner not to distinguish between case labels that lead to the same code (fall-through), so some counters are no longer needed.

Options to Be Avoided on Devices with Limited Memory

A few words about what not to use or enable; the latter refers to features that are not enabled by default.

DO NOT use the option --cs-record-empty-functions, which makes the CoverageScanner instrument functions with an empty body.

Summary

USE

(in order of descending importance, i.e., impact on memory consumption):

1) FIRST ALTERNATIVE: either --cs-condition (Default), --cs-decision, or --cs-statement-block (not recommended), to set a coverage level with less than the maximum memory consumption.

OR

SECOND ALTERNATIVE: --cs-mcc-max=<size> or --cs-mcdc-max=<size>, to set the maximum size of the instrumentation tables used for MCC or MC/DC, respectively (with a potential fallback to condition coverage).

2) either --cs-hit, or --cs-counter-size=<number> (where <number> is 1 or 4), to reduce the size of the counters used.

3) --cs-partial-instrumentation, to enable partial instrumentation.

4) --cs-no-execution-time, to disable the performance counters.

5) OUTPUT_BUFFER_SIZE=<size> (e.g. OUTPUT_BUFFER_SIZE=64) in the ‘.cspro’ file, to reduce the size of the internal buffer used for generating an execution report.

6) --cs-minimal-api

7) --cs-combine-switch-cases

DO NOT USE

(in order of descending importance, i.e., impact on memory consumption):

1) --cs-mcc or --cs-mcdc, UNLESS accompanied by --cs-mcc-max=<size> or --cs-mcdc-max=<size>, respectively.

2) --cs-keep-instrumentation-files

3) --cs-record-empty-functions

The post Measuring Code Coverage on Devices with Limited Memory appeared first on froglogic.

Running GUI Tests on Each Commit in GitLab CI/CD

In a recent article, we wrote about running Squish tests on merge requests in GitLab. In this article, we present a solution to run Squish tests on each commit.

Overview

We invoke the squishrunner command to execute the tests and generate JUnit and web reports. Additionally, squishrunner is called with the --exitCodeOnFail switch so that it returns a custom (nonzero) exit code if any of the test cases failed, and zero otherwise. This lets GitLab set the job status accordingly.

Runner Settings

The first step is to configure the GitLab Runner with the following environment variables (a sketch of a possible runner configuration follows the list):

  • SQUISH_DIR – Squish installation directory
  • SQUISH_LICENSEKEY_DIR – Squish license location
  • SQUISH_SERVER_PORT – Port at which the squishserver process is started
  • DISPLAY – Display used to display an application GUI. We use the VNC Server to provide a headless display.
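
How these variables are provided depends on your runner setup. One possibility, shown here only as a sketch with placeholder values, is the environment setting in the runner’s config.toml (only the relevant keys are shown; DISPLAY_NO is an additional helper variable used by the job below to start the VNC server):

[[runners]]
  name = "squish-runner"
  executor = "shell"
  environment = [
    "SQUISH_DIR=/opt/squish",
    "SQUISH_LICENSEKEY_DIR=/home/gitlab-runner/licenses",
    "SQUISH_SERVER_PORT=4432",
    "DISPLAY=:1",
    "DISPLAY_NO=1"
  ]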

Job Configuration

The next step is to define a pipeline in the file .gitlab-ci.yml. Pipelines are defined by specifying jobs that run in stages. In our example, we define a job named “squish-tests1” which is run in a test stage.

squish-tests1:
    stage: test
    script: 
    - echo DISPLAY=$DISPLAY
    - echo "Starting VNC Server"
    - vncserver :$DISPLAY_NO
    - echo $SQUISH_DIR
    - echo "Starting squishserver on port=$SQUISH_SERVER_PORT..."
    - $SQUISH_DIR/bin/squishserver --port $SQUISH_SERVER_PORT 1>server.log 2>&1 &
    - sleep 5
    - echo "Register AUT..."
    - $SQUISH_DIR/bin/squishserver --port $SQUISH_SERVER_PORT --config addAUT addressbook $SQUISH_DIR/examples/qt/addressbook
    - echo "Starting tests..."
    - $SQUISH_DIR/bin/squishrunner --port $SQUISH_SERVER_PORT --testsuite /home/tomasz/suites/suite_PageObjects --exitCodeOnFail 13 --reportgen junit,junit_report.xml --reportgen html,web_report
    after_script:
    - echo "Stopping squishserver..."
    - $SQUISH_DIR/bin/squishserver --stop --port $SQUISH_SERVER_PORT &
    - echo "Stopping VNC Server..."
    - vncserver -kill :$DISPLAY_NO
    - sleep 5
    artifacts:
       when: always    
       reports: 
          junit: junit_report.xml
       paths:
         - server.log
         - junit_report.xml
         - web_report/

The job performs the following actions:

  1. Start squishserver and redirect its stdout and stderr output to a server.log file
  2. Register the Application Under Test (AUT)
  3. Call squishrunner to run the test suite, generate a JUnit report and an HTML report, and pass the --exitCodeOnFail 13 setting
  4. Stop squishserver
  5. Collect artifacts. We use the setting when: always, so artifacts are collected regardless of job status (by default GitLab only collects artifacts on successful job executions, which is not good in our case, as we need an HTML report to analyze the cause of failures).

Example

When committing a change to the AUT, jobs defined in the pipeline script are started. After the application is built (not covered in our pipeline example), the test stage is executed. As part of this stage, GUI tests are executed using the Squish GUI Tester. The below screenshot shows the console output from test execution.

After test execution, artifacts, including the HTML report, are uploaded. To analyze detailed results in the HTML report, you need to select Download in the Job Artifacts view and open web_report/index.html in a web browser.

The post Running GUI Tests on Each Commit in GitLab CI/CD appeared first on froglogic.
