
Squish tip of the week: Resizing a docked window


Resizing a docked window isn't always as simple as it may seem.
Docked windows often change height and width as well as their docked location. The control or widget can also be more complex, so the simple MouseDrag that works for non-docked windows may not resize it.

The example below illustrates how to resize a docked window when working with the Qt Widget QDockWidget:

# About this example:
# Supply the specific QDockWidget's symbolic or real name
# (in this sample application the QDockWidget is ColorSwatch).
# The change_* functions take three parameters: the QDockWidget
# as o, the resize amount in pixels as xdiff, and an optional
# pause in seconds as snoozeFactor.
 
def main():
    startApplication("mainwindow")
 
    dockToResize = waitForObject(":Qt Main Window Demo.White Dock Widget [*]_ColorSwatch")
 
    change_height_on_top(dockToResize, -20)
    change_height_on_top(dockToResize, 20)
    change_height_on_bottom(dockToResize, -20)
    change_height_on_bottom(dockToResize, 20)
    change_width_on_left(dockToResize, -20)
    change_width_on_left(dockToResize, 20)
    change_width_on_right(dockToResize, 20)
    change_width_on_right(dockToResize, -20)
 
def change_height_on_top(o, xdiff, snoozeFactor = 0):
    snooze(snoozeFactor)
    mousePress(o, 50, -2, MouseButton.LeftButton)
    start = 0
    end = xdiff
    step = 1
    if xdiff < 0:
        step = -1
    for i in range(start, end, step):
        mouseMove(o, 50, -2 + i)
    mouseRelease()
    snooze(snoozeFactor)
 
def change_height_on_bottom(o, xdiff, snoozeFactor = 0):
    snooze(snoozeFactor)
    mousePress(o, 50, o.height + 2, MouseButton.LeftButton)
    start = 0
    end = xdiff
    step = 1
    if xdiff < 0:
        step = -1
    for i in range(start, end, step):
        mouseMove(o, 50, o.height + 2 + i)
    mouseRelease()
    snooze(snoozeFactor)
 
def change_width_on_left(o, xdiff, snoozeFactor = 0):
    snooze(snoozeFactor)
    mousePress(o, -3, 50, MouseButton.LeftButton)
    start = 0
    end = xdiff
    step = 1
    if xdiff < 0:
        step = -1
    for i in range(start, end, step):
        mouseMove(o, -3 + i, 50)
    mouseRelease()
    snooze(snoozeFactor)
 
def change_width_on_right(o, xdiff, snoozeFactor = 0):
    snooze(snoozeFactor)
    mousePress(o, o.width + 3, 50, MouseButton.LeftButton)
    start = 0
    end = xdiff
    step = 1
    if xdiff < 0:
        step = -1
    for i in range(start, end, step):
        mouseMove(o, o.width + 3 + i, 50)
    mouseRelease()
    snooze(snoozeFactor)

For more information, and a more extensive example, see Article – Resizing Docked Windows (QDockWidget)


Squish tip of the week: Create tests involving multiple AUTs


Squish can create and execute tests against multiple Applications Under Test (AUTs).

Switch between applications (for recording or playback) using Application Context.

Let's say you are testing a chat application:
  • Two chat sessions are interacting.
  • Even if one session is a Desktop application and the other is a Mobile App – it’s all possible!
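
As a rough sketch of the idea in Python (the AUT names "chat_desktop" and "chat_mobile", and the object names, are hypothetical; register your own AUTs in the test suite settings):

def main():
    # startApplication() returns an ApplicationContext handle for each AUT
    desktopCtx = startApplication("chat_desktop")   # hypothetical AUT name
    mobileCtx = startApplication("chat_mobile")     # hypothetical AUT name

    # Drive the desktop client first
    setApplicationContext(desktopCtx)
    type(waitForObject(":Chat.Message_QLineEdit"), "Hello from the desktop")
    clickButton(waitForObject(":Chat.Send_QPushButton"))

    # Switch the application context and verify the message arrived in the mobile client
    setApplicationContext(mobileCtx)
    test.compare(str(waitForObject(":Chat.LastMessage_QLabel").text), "Hello from the desktop")
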
Learn more here:


Squish tip of the week: Reuse tests, scripts and test resources


What makes your test framework shine? Its reusability of common actions and resources.

All scripts, test data, verification points and even gestures in your Test Suite Resources are available to all Test Cases in your Test Suite.


It doesn’t stop there…

Because Squish supports real-world scripting languages (Python, JavaScript, Perl, Tcl and Ruby), your scripts, even those written independently of Squish, can also be made available to every test suite through Squish's Global Scripts view.
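
As a small, hedged example of that reuse (the helper file and function names below are hypothetical), a test case can pull in a shared module with source() and findFile(), which searches the test case directory, the Test Suite Resources and any configured Global Script directories:

# Load a shared helper script; findFile("scripts", ...) also searches
# the Global Script directories configured in the Squish IDE.
source(findFile("scripts", "common_helpers.py"))   # hypothetical file name

def main():
    startApplication("addressbook")
    # login_as() would be defined in common_helpers.py and shared by many suites
    login_as("testuser")                            # hypothetical helper function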


Squish tip of the week: Automate Business Rule Validation




Applications often have a set of business rules; rules that govern how an application should react based on a given set of input or actions. Or as Wikipedia defines it:

A business rule is a rule that defines or constrains some aspect of business and always resolves to either true or false.

Validate your application’s business rules using data-driven tests

Take a simple set of steps, perhaps even a snippet of a test case; say, the four type() calls that fill in the Forename, Surname, Email and Phone fields in the following example:

def main():
    startApplication("AddressBookSwing.jar")
    activateItem(waitForObjectItem(":Address Book_JMenuBar", "File"))
    activateItem(waitForObjectItem(":File_JMenu", "New..."))
    activateItem(waitForObjectItem(":Address Book - Unnamed_JMenuBar", "Edit"))
    activateItem(waitForObjectItem(":Edit_JMenu", "Add..."))
    type(waitForObject(":Address Book - Add.Forename:_JTextField"), "sam")
    type(waitForObject(":Address Book - Add.Surname:_JTextField"), "smith")
    type(waitForObject(":Address Book - Add.Email:_JTextField"), "sam@smith.com")
    type(waitForObject(":Address Book - Add.Phone:_JTextField"), "123.123.1234")
    clickButton(waitForObject(":Address Book - Add.OK_JButton"))
Ask yourself (or better yet, your team):
  • What are the valid input values in each of these fields?
  • What values are not permitted in each of these fields?
  • Do the fields have any minimum character requirements?
  • Any maximum character requirements?
  • What should display in the event any of these requirements are not met? And when should it display?

Given answers to the above set of questions, you can begin compiling a collection of data to validate the business rules.

Business Rules Data Table

  | # | field    | input | result_details                                      | comments                                              |
  | 1 | Forename | sam   |                                                     | Expected Result: input accepted without error         |
  | 2 | Forename | s@m   | Special characters not permitted in Forename field  | Expected Result: Warning message appears immediately  |
  | 3 | ...      |       |                                                     |                                                       |

Modify your Test Case to use the data

Use the Make Code Data Driven wizard to give yourself a jump start.


Then

  1. Update the text field to use the related variable(s) and
  2. Add a verification point to validate the expected result
Updated example

def main():
    startApplication("AddressBookSwing.jar")
    activateItem(waitForObjectItem(":Address Book_JMenuBar", "File"))
    activateItem(waitForObjectItem(":File_JMenu", "New..."))
    activateItem(waitForObjectItem(":Address Book - Unnamed_JMenuBar", "Edit"))
    activateItem(waitForObjectItem(":Edit_JMenu", "Add..."))
    
    for record in testData.dataset("businessRules.tsv"):
        field = testData.field(record, "field")
        input = testData.field(record, "input")
        result_details = testData.field(record, "result_details")
        comments = testData.field(record, "comments")
        textField = waitForObject(":Address Book - Add.%s:_JTextField" % field)
        textField.setText("")
        type(textField, input)
        waitFor("object.exists(':Address Book - Add.%s_Warning:_JLabel' % field)", 20000)
        test.compare(findObject(":Address Book - Add.%s_Warning:_JLabel" % field).text, result_details, comments)
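
For reference, the businessRules.tsv read by the loop above is simply a tab-separated file whose columns match the table, along these lines (content illustrative, columns separated by tabs):

field	input	result_details	comments
Forename	sam		Expected Result: input accepted without error
Forename	s@m	Special characters not permitted in Forename field	Expected Result: Warning message appears immediately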

Squish tip of the week: How to find answers to your Squish questions


There's a wealth of Squish and automated GUI testing information at your fingertips. Sometimes the key is simply knowing where to look!

Visit our Squish Resources page

Squish tip of the week: Bring window to foreground


When working with multiple applications, or multiple windows in a single application, you can tell Squish to bring the desired window to the foreground before working with the window.

This applies to many different technologies or toolkits.

For example, when working with NSWindow on Mac OS X (Cocoa), given the name or title of the window, you can do the following:

def main():
    startApplication(...)
    ...

    objName = "{title='MyApp Window #1' type='NSWindow'}"
    waitFor("object.exists(\"%s\")" % objName, 20000)
    nsw = findObject(objName)
    nsw.makeKeyAndOrderFront_(nsw)
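
For a Qt-based AUT the same idea applies. The following is a hedged sketch that assumes the widget's raise() and activateWindow() members are reachable through Squish's Qt wrappers (the object name is hypothetical; raise is looked up via getattr because it is a Python keyword):

def bring_to_front(windowName):
    w = waitForObject(windowName)
    getattr(w, "raise")()    # QWidget::raise()
    w.activateWindow()       # QWidget::activateWindow()

# Usage, with a hypothetical real name:
# bring_to_front("{type='QMainWindow' windowTitle='MyApp Window #1'}")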

Read more about this Mac OS X Cocoa example, and find examples for other toolkits, in our Knowledgebase articles.

Remember, we're always adding to our knowledgebase and other online resources to provide you with the most current and helpful information!


Squish tip of the week: How to quickly identify test failures vs flaky tests


Do you have tests that fail randomly? Or maybe just periodically?

Categorizing Test Failures

Identifying and separating test failures related to defects from failures caused by flaky tests isn't always as straightforward as it may sound.

Why important?

Time. Time lost that compounds as the test suite grows.

When test failures are not handled intelligently, manually troubleshooting and distinguishing the two can lead to a time engulfing maintenance nightmare.

What now?

Re-run failed tests multiple times, either from your automated batch or CI process, or directly from your scripts, and log the failure occurrence rate in the results.

  • Tests which fail should be run at least three times (or pick another threshold, with the goal of increasing it over time, for example to 1 in 5 or 1 in 10).
  • If the test passes on just one of the three runs, flag it for further investigation and improvement, keeping in mind that the cause may indeed be the AUT, and not the test.
  • If a test fails more than a third of the time, investigate it further as a likely software bug or defect.
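
Outside of Squish itself, a small wrapper along these lines can drive the re-runs; a hedged sketch assuming squishrunner is on the PATH and exits with a non-zero code when the test fails (verify the exact behaviour and options for your Squish version):

import subprocess

def rerun_and_count(suite, case, attempts=3):
    """Re-run one test case several times and report how often it fails."""
    failures = 0
    for _ in range(attempts):
        # Assumption: squishrunner returns a non-zero exit code on test failure.
        exitCode = subprocess.call(["squishrunner", "--testsuite", suite, "--testcase", case])
        if exitCode != 0:
            failures += 1
    print("%s / %s failed %d of %d runs" % (suite, case, failures, attempts))
    return failures

# A case that fails in some runs but not all is a candidate flaky test:
# failures = rerun_and_count("/path/to/suite_payments", "tst_checkout")
# flaky = 0 < failures < attempts
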
Keep in mind…

Tests are most effective when produced with a specific goal in mind, not as a series of items to validate along the way. Longer tests with many dependencies make the framework harder to maintain, and lengthen the time needed to identify the source of each issue: exactly where, and what, went wrong. As your test framework grows, the maintenance investment grows with it.

Think in terms of a Behavior Driven Test:

  1. Given x
  2. When y
  3. Then z

Too many Givens and Whens can muddy the waters, making it more time consuming to pinpoint the root cause of an issue.


Start small, start smart, refine… then expand!

Remember, we're always adding to our knowledgebase and other online resources to provide you with the most current and helpful information!


Squish tip of the week: How to get Windows process information


Squish 6.0 Beta with fully integrated BDD support released


About two years after the release of Squish 5.0, we are proud and excited to make available a BETA of Squish 6.0 to you.

The main new features of this release are fully integrated support for Behavior Driven Development and Testing (BDD) as well as major improvements to Squish’s reporting capabilities.

You can read the full announcement at http://www.froglogic.com/news-events/index.php?id=squish-6.0.0-beta-released.html.

A quick video introduction to Squish 6.0 and BDD can be found at http://youtube.com/embed/62Vrnb21hio?vq=hd1080&autoplay=1&rel=0&showinfo=0&autohide=1.

We will also host a live webinar showcasing what's new in Squish 6.0 shortly. You will find the details in the announcement linked above.

We are looking forward to your feedback which we happily accept at squish@froglogic.com.

Squish tip of the week: How to automate your BDD test scenarios


Did you know that you can automate your existing BDD test scenarios using Squish 6.0?

Have any existing scenarios in Gherkin? If not, no worries, creating them is as easy as writing in your native language.

Automate your BDD scenarios in 3 easy steps:
  1. Copy and paste (or write) the Gherkin scenario in a Squish BDD Test Case
    Feature: Valid conversion
    
        Scenario: Convert meters to centimeters
            Given the Unit Converter is running
            When I enter 378.9
            And choose to convert from Meters
            And choose to convert to Centimeters
            And click Convert
            Then 37890 should be displayed in the result field
    
  2. Click Record
  3. Follow the Control Bar as it walks you through recording each step in the scenario


That’s it! Your BDD scenario is automated. Just click Run to play it back!
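
Behind the scenes, recording generates one implementation step per Gherkin line. A hedged Python sketch of what the recorded steps for this scenario might roughly look like (the AUT name and object names are hypothetical):

@Given("the Unit Converter is running")
def step(context):
    startApplication("unitconverter")                 # hypothetical AUT name

@When("I enter |any|")
def step(context, value):
    type(waitForObject(":UnitConverter.Input_QLineEdit"), value)

@When("choose to convert from |word|")
def step(context, unit):
    mouseClick(waitForObject(":UnitConverter.From_QComboBox"))
    mouseClick(waitForObjectItem(":UnitConverter.From_QComboBox", unit))

@When("choose to convert to |word|")
def step(context, unit):
    mouseClick(waitForObject(":UnitConverter.To_QComboBox"))
    mouseClick(waitForObjectItem(":UnitConverter.To_QComboBox", unit))

@When("click Convert")
def step(context):
    clickButton(waitForObject(":UnitConverter.Convert_QPushButton"))

@Then("|integer| should be displayed in the result field")
def step(context, expected):
    test.compare(str(waitForObject(":UnitConverter.Result_QLineEdit").text), str(expected))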

Watch this 10 minute video to learn more

Or request an evaluation of Squish 6.0 Beta, and follow the online tutorials:

BDD tutorials by Squish Edition

  1. Select your Squish edition-specific tutorial (e.g. Qt, Android, Java) from this list of tutorials
  2. Click Tutorial: Designing Behavior Driven Development (BDD) Tests

Don’t forget to sign up for our upcoming webinars:




Squish tip of the week: How to create cross-platform BDD tests


BDD tests can span multiple platforms. Watch this great video to see how to execute the same BDD tests across several platforms.

Learn how in less than 10 minutes!

Squish tip of the week: Using variable values in the Squish Script Console


Following on from a past blog post, Scripting with the help of the Squish Script Console, an enhancement to the Script Console in Squish 6.0 makes the current variables and their values available from the console.

Now manually scripting or troubleshooting your scripts is that much easier!

The example below illustrates using the record variable value in a statement within the Squish Script Console to validate expected output and syntax.
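
For example, if execution is paused inside the data-driven loop from the earlier business-rules tip, the live record variable can be used directly in statements typed into the Script Console (a hedged illustration, not actual console output):

# Entered interactively in the Squish Script Console while paused at a breakpoint:
testData.field(record, "field")            # inspect the current record
testData.field(record, "result_details")   # check the value about to be compared
":Address Book - Add.%s:_JTextField" % testData.field(record, "field")   # try out an object name before using it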




Learn more




Squish tip of the week: What to automate next? How to best expand your automated test suite


Once you’ve established an initial test framework – perhaps you even followed the Squish Tip of the Week: Where to start? – how do you determine where to continue expanding the automated test suite?

New features! Features not even in Development (yet).

  1. As a team, write the new feature in Gherkin. That becomes your initial test.
  2. Automate the feature (yes, even before it’s implemented – see diagram’s Writing a failing test step)
  3. Development implements the feature from the same Gherkin feature file
  4. You make the automated test pass
  5. Repeat
Behavior Driven Development and Testing


If you haven't already, request an evaluation of Squish 6.0 Beta, and begin automating features sooner with the introduction of BDD automation.


Learn more




Squish tip of the week: How to handle tests requiring user input


While the goal is automating a test from end to end, there are those (hopefully) occasional circumstances where human interaction isn’t avoidable.

How about tests where a password is required? What if I have to manually configure something as part of a test?

Use Squish’s testInteraction Functions to allow user-input points during playback.

Examples using testInteraction functions

The following example uses a simple Java Swing application available for download from http://www.codejava.net/download-attachment?fid=154, or if you prefer to download directly from our blog: SwingJPasswordFieldDemo

Pause for input

As you can read more about here, testInteraction provides many functions. The example below demonstrates pausing: the testInteraction.information() call presents the end user with a message and an OK button, and playback continues once OK is clicked.

function main() {
    startApplication("SwingJPasswordFieldDemo.jar");
    clickButton(waitForObject(":Swing JPasswordField Demo Program.OK_JButton"));
    waitFor("object.exists(':Message.Wrong password!_JLabel')", 20000);
    test.compare(findObject(":Message.Wrong password!_JLabel").text, "Wrong password!");
    clickButton(waitForObject(":Message.OK_JButton"));
    testInteraction.information("Please enter the password and click OK on this dialog to continue.");
    clickButton(waitForObject(":Swing JPasswordField Demo Program.OK_JButton"));
    waitFor("object.exists(':Message.Congratulations! You entered correct password._JLabel')", 20000);
    test.compare(findObject(":Message.Congratulations! You entered correct password._JLabel").text, "Congratulations! You entered correct password.");
    clickButton(waitForObject(":Message.OK_JButton"));
}

Use password (input) in script

You can also prompt for input to be used in a script. If the input is a password it is masked on screen; however, the data is not masked when used in the script (it simply becomes a variable value). In the example below, testInteraction.password() collects the password, which is then typed into both password fields. It is also common to use testInteraction.input() when the information entered is not a password (testInteraction.password is new in 6.0).

function main() {
    startApplication("SwingJPasswordFieldDemo.jar");
    clickButton(waitForObject(":Swing JPasswordField Demo Program.OK_JButton"));
    waitFor("object.exists(':Message.Wrong password!_JLabel')", 20000);
    test.compare(findObject(":Message.Wrong password!_JLabel").text, "Wrong password!");
    password = testInteraction.password("Please enter the password");
    clickButton(waitForObject(":Message.OK_JButton"));
    mouseClick(waitForObject(":Swing JPasswordField Demo Program.Enter password:_JPasswordField"), 35, 2, 0, Button.Button1);
    type(waitForObject(":Swing JPasswordField Demo Program.Enter password:_JPasswordField"), password);
    mouseClick(waitForObject(":Swing JPasswordField Demo Program.Confirm password:_JPasswordField"), 40, 3, 0, Button.Button1);
    type(waitForObject(":Swing JPasswordField Demo Program.Confirm password:_JPasswordField"), password);
    clickButton(waitForObject(":Swing JPasswordField Demo Program.OK_JButton"));
    waitFor("object.exists(':Message.OK_JButton')", 20000);
    test.compare(findObject(":Message.OK_JButton").text, "OK");
    waitFor("object.exists(':Message.Congratulations! You entered correct password._JLabel')", 20000);
    test.compare(findObject(":Message.Congratulations! You entered correct password._JLabel").text, "Congratulations! You entered correct password.");
    clickButton(waitForObject(":Message.OK_JButton"));
}

Check for interactive state

You can also create tests which will continue without the manual input, although you'll want to consider whether continuing makes sense for each scenario. The following example allows the script to continue, but it will always fail on the next step, since the manual input is required to proceed. See the testInteraction.isAvailable() check in the if/else block below.

function main() {
    startApplication("SwingJPasswordFieldDemo.jar");
    clickButton(waitForObject(":Swing JPasswordField Demo Program.OK_JButton"));
    waitFor("object.exists(':Message.Wrong password!_JLabel')", 20000);
    test.compare(findObject(":Message.Wrong password!_JLabel").text, "Wrong password!");
    clickButton(waitForObject(":Message.OK_JButton"));
    if (testInteraction.isAvailable()){
        testInteraction.information("Please enter the password and click OK on this dialog to continue.");
     }else{
        test.warning("This script requires the \-\-interactive option when run from the command line.");
        test.log("Trying to proceed with test without manual input");
     }
    clickButton(waitForObject(":Swing JPasswordField Demo Program.OK_JButton"));
    waitFor("object.exists(':Message.Congratulations! You entered correct password._JLabel')", 20000);
    test.compare(findObject(":Message.Congratulations! You entered correct password._JLabel").text, "Congratulations! You entered correct password.");
    clickButton(waitForObject(":Message.OK_JButton"));
}

More examples are available here including:

  • testInteraction.question
  • testInteraction.warning


Learn more


Squish tip of the week: Isolating Setup from Test Objectives


Isolating the setup, or test pre-conditions, from the objective of the test produces clearer and more accurate test results.

Remember, a good test case should run without the need to first run a separate test case.

Really? How is that possible if the results of one test are needed for the next test?

Consider the test’s objective

Adding a patient visit logs the visit details in the patient history

Ask yourself

In order to add a patient visit, what must also exist? A patient record.

Verifying the creation of a patient record, however, is not the objective of this test; yet a patient record must exist before a patient visit can be logged.

The patient record must therefore be part of the test setup. Should an issue occur during the setup, the test should be terminated, because running the test and producing a failed result would be misleading. Such results take longer to filter through and pinpoint the true reason for the failure, impacting time now as well as long-term metrics. In the following scenario, if the Given and And steps do not execute successfully, the result should log a failure in the setup and not in the test's objective.

Sample Scenario

Scenario: Patient visit logs appear in patient’s history
   Given the Patient Portal is running
   And the Patient Record exists
   When I log a Patient Visit
   Then the Patient Visit should appear in the Patient History

That does not mean that I never test (1) starting the application or (2) creating a patient record – those would simply be different scenarios or features in separate tests.

How can I avoid having duplicate scripts or test case steps?

By refactoring and breaking apart tests into segments, functions or implementation steps.

The Given and And statements in the scenario above can use shared (or existing) functions: the same functions used by the scenarios which validate that the Patient Portal application launches and that a Patient Record can be created.

Scenario: The Patient Portal application launches
   Given the Patient Portal is installed
   When the Patient Portal launches
   Then the Patient Portal dashboard should be visible

Scenario: A new patient record exists after adding one entry
   Given the Patient Portal is running
   When I create a new Patient Record
   Then the new Patient Record should be visible in the Patient list
   And the Patient History should be blank

Each of the statements in the scenarios above maps to a set of automated steps, callable functions, using either of, or a combination of, the following two approaches (a small sketch follows the list):

  • Script Test Cases – traditional scripts written in Python, Perl, JavaScript, Tcl or Ruby
  • or BDD Test Cases – which contain implementation steps, an automation layer, written in Python, Perl, JavaScript, Tcl or Ruby
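
As a hedged illustration of the second approach, the setup statements can map onto small implementation steps that call shared functions; every name below is hypothetical:

source(findFile("scripts", "patient_portal_helpers.py"))   # hypothetical shared script

@Given("the Patient Portal is running")
def step(context):
    startApplication("patientportal")                       # hypothetical AUT name

@Given("the Patient Record exists")
def step(context):
    # Shared function, also used by the scenario that verifies record creation
    create_patient_record("Jane", "Smith")

@When("I log a Patient Visit")
def step(context):
    log_patient_visit("2015-10-01", "Check-up")

@Then("the Patient Visit should appear in the Patient History")
def step(context):
    test.verify(patient_history_contains("Check-up"))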

Learn more



Squish 6.0 with fully integrated BDD support released!


About two years after the release of Squish 5.0, we are proud and excited to make Squish 6.0 available to you.

The main new features of this release are fully integrated support for Behavior Driven Development and Testing (BDD) as well as major improvements to Squish’s reporting capabilities.

You can read the full announcement at http://www.froglogic.com/news-events/index.php?id=squish-6.0.0-released.html.

A quick video introduction to Squish 6.0 and BDD can be found at http://youtube.com/embed/62Vrnb21hio?vq=hd1080&autoplay=1&rel=0&showinfo=0&autohide=1.

We will also host a live webinar showcasing what’s new in Squish 6.0 on Wednesday, September 9th. Register here.

We hope you take advantage of the new features and, as always, look forward to your feedback at squish@froglogic.com.

Squish tip of the week: Use grouping and filtering in your Squish HTML reports


The new HTML reports in Squish 6.0 make pinpointing failures and drilling down into details much easier.

  • View a collection of reports from one or more test suites or executions
  • Tree-style navigation
  • Filter by Pass / Fail
  • Hide or display logs
  • Tool tip details on mouse over
  • Drill down to a single report
  • Pass percentage automatically calculated
  • Ability to group the pass / fail status of sections in a given test case

Try them out!

  1. Download this example report
  2. Or try it yourself using the --reportgen html option
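
For instance, a hedged command line (paths are placeholders; check the exact --reportgen syntax for your Squish version):

squishrunner --testsuite /path/to/suite_addressbook --reportgen html,/path/to/report_dir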

Example





Squish Coco 3.3 Released – now with Patch Analysis


We are excited to announce the release of Squish Coco 3.3!

The new features of this release include Patch Analysis, speed improvements, updated reporting and more.

You can read the full announcement at http://www.froglogic.com/news-events/index.php?id=squish-coco-3.3.0.html.

We will also host a live webinar showcasing Squish Coco 3.3 on Tuesday, September 22nd. Register here.

We hope you take advantage of the new features and, as always, look forward to your feedback at squish@froglogic.com.

Squish tip of the week: Reuse script functions in BDD tests


With the introduction of BDD support, your existing scripts aren't lost. Quite the contrary: existing scripts and functions still work as always, and those functions can now also be called from BDD tests.

Consider the following Test Suite Resource functions:

def invokeMenuItem(menu, item):
    activateItem(waitForObjectItem(":AB_QMenuBar", menu))
    activateItem(waitForObjectItem("{type='QMenu' title='%s'}" % menu, item))

def addNameAndAddress(nameAndAddress):
    invokeMenuItem("Edit", "Add...")
    for fieldName, text in zip(("Forename", "Surname", "Email", "Phone"), nameAndAddress):
        type(waitForObject(":%s:_QLineEdit" % fieldName), text)
    clickButton(waitForObject(":AB-Add.OK_QPushButton"))

def checkNameAndAddress(record):
    table = waitForObject(":Address Book_QTableWidget")
    for column in range(len(record)):
        test.compare(table.item(0, column).text(), record[column])

A traditional Script Test Case would appear as follows, using the functions directly:
source(findFile("scripts", "sharedFunctions.py"))

def main():
    startApplication("addressbook")
    newEntry = ("Jane", "Smith", "jane@smith.com", "123.123.1234")
    invokeMenuItem("File", "New")
    addNameAndAddress(newEntry)
    checkNameAndAddress(newEntry)

A BDD Test Case would appear as follows:
Scenario Outline: A first scenario in which the feature can be exercised
  Given the application is running
   And a new address book is open
  When a new entry '<forename>','<lastname>','<email>','<phone>' is added
  Then the info should match '<forename>','<lastname>','<email>','<phone>'
  Examples:
      | forename  | lastname | email          | phone        |
      | Jane      | Smith    | jane@smith.com | 123.123.1234 |

The BDD steps are automated by the Test Suite Resource’s Implementation File, available for reuse to all other BDD Test Cases in the Test Suite, and can be recorded or manually created.

import __builtin__
source(findFile("scripts", "sharedFunctions.py"))

@Given("the application is running")
def step(context):
    startApplication("addressbook")

@Given("a new address book is open")
def step(context):
    invokeMenuItem("File", "New")

@When("a new entry '|word|','|word|','|any|','|any|' is added")
def step(context, forename, surname, email, phone):
    newEntry = (forename, surname, email, phone)
    addNameAndAddress(newEntry)

@Then("the info should match '|word|','|word|','|any|','|any|'")
def step(context, forename, surname, email, phone):
    newEntry = (forename, surname, email, phone)
    checkNameAndAddress(newEntry)

Why transition?


Squish tip of the week: Determine patch regression risk


What if you could determine the impact of a patch on your application?

  • Do you know which tests, if any, validate the impacted code?
  • How about what code or functions were introduced and require new tests?

Squish Coco’s Patch Analysis report can tell you that and more

Scenario:

You've applied a patch to your application. Now you want to know the potential impact of that patch on the stability of your application.

In the prior version of your product you ran your tests (unit, automated, manual, etc.) against your instrumented application (application built with Squish Coco instrumentation enabled).

Now, with the patch applied, simply run a diff of the two source versions; using the diff result, Squish Coco's Patch Analysis report reveals:

  • Source code impacted by the modification
  • Which tests cover the impacted code
  • Source code impacted where no tests exist
  • New source code for which tests are needed
  • and more
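
Creating the diff itself uses ordinary tooling; for example (paths and version numbers are placeholders), the resulting patch file is then the input the Patch Analysis report works from:

diff -u -r -N myapp-1.0/src myapp-1.1/src > myapp-1.1.patch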


Assess the risk of potential regressions: what to retest, where new tests are necessary, and the impact on your release plan.

Snippets from a Sample Report

Overview Statistics


Source Coverage Details


More Information


