Channel: froglogic

Squish tip of the week: Creating an automated test from a manual test


As you begin building your Automated GUI Testing Framework, the first tests you’re likely to automate are those manual test cases which, while required, are mundane, repetitive, or demand meticulous review of detail.

Use existing manual test cases as your base

Sample Manual Test Case: Create New Address Book
  1. Open Application
  2. Create a new address book using application menus
  3. Expected result: Confirm new addressbook created without any entries

Each step in the test case becomes a comment in the script
# Create New Address Book
# Open Application
# Create new address book
# Confirm new addressbook created without any entries

Add the main() function applicable to the scripting language you’re using
# Create New Address Book

def main():
# Open Application

# Create new address book

# Confirm new addressbook created without any entries

Automate the test case in small sections, or snippets, using the manual test case steps as your guide
  1. Place the cursor on the empty line after the #Open Application comment
  2. Right-click and select Record Snippet
  3. Once your application launches, click Stop

Your script should now appear similar to the following
# Create New Address Book

def main():
# Open Application
    startApplication("addressbook")

# Create new address book

# Confirm new addressbook created without any entries

Before recording the next Snippet, launch the application using Launch AUT

We’re doing this for two reasons:

  1. Since the startApplication step was already recorded, we do not want to record that action again, which would create a duplicate step
  2. Using Launch AUT, instead of relying on Squish to open the AUT upon clicking Record, ensures the AUT remains open after we stop and start multiple Snippet recordings

Recording additional Snippets
  1. Click Launch AUT
  2. Place the cursor below the # Create new address book line, right-click and select Record Snippet
  3. Once the recording begins, select File > New in the Addressbook application
  4. Click Stop

Last, we’ll use Record Snippet to record the final portion of the script, confirming the new Addressbook was created without any entries
  1. Place the cursor below the # Confirm new addressbook created without any entries line, right-click and select Record Snippet
  2. Once the recording begins, select Insert Object Properties Verification Point
  3. When the Squish IDE appears, select Scriptified Properties VP
  4. Next click the Pick tool in the Application Objects toolbar
  5. Once the Addressbook application reappears, click the empty table
  6. The properties of the picked object display in the Squish IDE’s Properties window
  7. Check the rowCount property and click Insert
  8. When the Control bar returns, click Stop
    1. The script should appear similar to the following
      # Create New Address Book
      
      def main():
      # Open Application
          startApplication("addressbook")
      
      # Create new address book
          activateItem(waitForObjectItem
                       (":Address Book_QMenuBar", "File"))
          activateItem(waitForObjectItem
                       (":Address Book.File_QMenu", "New"))
      
      # Confirm new addressbook created without any entries
          waitFor("object.exists(':Address Book - Unnamed." +
                  "File_QTableWidget')", 20000)
          test.compare(findObject(":Address Book - Unnamed." + 
                                  "File_QTableWidget")
                       .rowCount, 0)
      
      
      
      

      Read more about recording snippets and other related topics below

Click here to request your free 30 day Squish evaluation


Squish tip of the week: Update a changed object in the Object Map


Changed Object?   No Problem!

Squish can help you update objects in your Object Map

Here’s How:
  1. Locate the item to update in the Object Map
    1. Manually navigate the Object Map to find the object, or
    2. Right-click the object’s Symbolic Name in the script and select Open Symbolic Name

  2. Start your application using Squish
    1. Click Launch AUT and navigate to the area in your application where the object to update exists
    2. OR set and run to a breakpoint in your script where the object to update exists

  3. Pick the object in the AUT and copy the object’s Real Name
    1. Click the Pick tool in the Application Objects window
    2. When the AUT appears, click the object of interest
    3. When the Application Objects window returns, right-click the highlighted object and select Copy Real Name

  4. Update the Real Name
    1. Return to the Object Map
    2. With the object of interest still highlighted, click Replace Real Name
Learn more:

Squish tip of the week: How to know what code your testing actually exercises


Did you know Squish GUI Tester and Squish Coco can work together to reveal more about your testing?

Integrate your testing with Squish Coco to answer these questions:
  • How many of our tests are redundant?
  • What areas of our application are the tests not reaching?
  • If I only had a given amount of time to execute as many tests as possible, which tests should I choose?
  • …?

What will this simple integration reveal about your testing coverage?

Read Measuring code coverage using Squish Coco & Squish GUI Tester to get started!


Testing Qt and QML applications @ Qt Developer Days


We are all excited that the prime Qt event of the year, the Qt Developer Days, is approaching and only 3 weeks away!

We at froglogic would like to thank KDAB, ICS and Digia for organizing this again. It will be great to meet many known faces and new Qt enthusiasts.

We will present the latest and greatest version of our GUI Test Automation tool ‘Squish’ and Code Coverage Analysis Tool ‘Squish Coco’.

At our booth you will be able to see Squish testing Qt and QML/QtQuick applications on various desktop platforms, embedded platforms and mobile platforms.

For those who want to get an in-depth introduction to Squish for automated Qt and QML testing, one of our senior training instructors will conduct the Introduction to Testing Qt applications with Squish training class on Monday, October 6th.

On Tuesday, October 7th, we will showcase Squish right after lunch.

On Wednesday, October 8th, I will give a talk about Behavior-Driven Development and Testing of Qt C++ and QML Applications where I will present how the BDD methodology can be applied to Qt/QML and specifically GUI tests.

We are looking forward to meeting you all at the Qt Developer Days :-)

Squish tip of the week: How to test Web Apps on Mobile Devices


Did you know that you can test web applications on mobile device or tablet browsers as well as desktop browsers?

With the Squish for Web edition installed on a desktop machine:
  1. Configure Squish to use a standalone proxy server listening on a port (let’s say 8001) by executing the following command from your Squish install directory:
    $ ./bin/squishserver --config setProxyConnectAddress localhost:8001
  2. Start an HTTP-Proxy server (using a different port number, let’s say 8044) on the same computer by executing the following command from your Squish install directory:
    $ ./bin/webproxy/proxy -H PC_NAME_OR_IP -p 8044 localhost 8001
  3. Connect your mobile device or tablet to the same network as your desktop computer
  4. Open the device’s Wi-Fi settings and edit your currently connected Wi-Fi network settings (iOS – click the i for more info, and in the HTTP PROXY section click Manual; Android – tap and hold the currently connected Wi-Fi network, click Modify network and check the Show advanced options check box)
  5. Enter your desktop computer’s IP address or name in the Proxy hostname or Server box, and enter the HTTP-Proxy port (in this example 8044) in the Proxy Port or Port box
  6. Save and close the settings area on your device
  7. To test your connection, open a browser on your device and navigate to http://www.froglogic.com/startsquish/ (or any link followed by /startsquish). The browser page should load a Squish/Web Automated GuiTesting page with a Waiting for start of next testcase… status
  8. Open the Squish (for Web) IDE and select Edit > Server Settings > Browser, and choose the Browser on Mobile Device option.
Squish is now configured to test Web applications on your device!

Learn how to integrate Squish GUI Tester and Jenkins

Squish tip of the week: How to Write Test Data to a File in 3 Simple Steps


Test results have value beyond basic reports; why not share the data?

Write test-related information (or pretty much anything) to an external file:

Write to a file in 3 simple steps
function main(){
   var file = File.open("C:\\path\\to\\file.txt", "a");
   file.write("Pass:" + String(test.resultCount("passes"))
            + "Fail:" + String(test.resultCount("fails")));
   file.close();
}
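The same idea in plain Python, outside Squish: the built-in open() stands in for Squish’s File API, and the counts are hypothetical stand-ins for test.resultCount() values.

```python
# Plain-Python sketch; "passes"/"fails" are hypothetical stand-ins for
# test.resultCount("passes") and test.resultCount("fails").
passes = 12
fails = 1

# Append the summary line to an external file, as the JS example does
with open("results.txt", "a") as f:
    f.write("Pass: %d Fail: %d\n" % (passes, fails))

# Read it back to confirm the write
with open("results.txt") as f:
    contents = f.read()
print(contents)
```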


Example writing to another application’s measurement file

The example below demonstrates how to write information from a test to a Squish Coco report file, which Squish Coco then imports as part of its coverage measurements.

Not only does the user have data from the automated GUI test, but Squish Coco also reveals what code the test covered and related analysis about the test (this assumes Squish Coco is installed and configured in advance).

tst_sampleTest.js
source(findFile("scripts","squishCocoLogging.js"))

function main(){    
    startApplication("addressbook");
    execution = getExecutionPath();
    logTestNameToCocoReport(squishinfo.testCase, 
                            execution);
    
    try{
       // body of script
       }
       catch(e){
           test.fail('An unexpected error occurred',
                      e.message)
       }
       finally{
           logTestResultsToCocoReport(test, execution)
       }
}


squishCocoLogging.js: does all the external file logging work:
function getExecutionPath(){
    var currentAUT = currentApplicationContext();
    var execution = currentAUT.cwd + "\\" + 
                    currentAUT.name + ".exe.csexe"
    return execution;
}

function logTestNameToCocoReport(currTestCase, execution){
    var testExecutionName = 
        currTestCase.substr(currTestCase
        .lastIndexOf('\\') + 1);
    
    var file = File.open(execution, "a");
    file.write("*" + testExecutionName + "\n");    
    file.close();
}

function logTestResultsToCocoReport(testInfo, execution){
    
    var currentAUT = currentApplicationContext();
    
    // wait until AUT shuts down
    while (currentAUT.isRunning)
      snooze(5);    

    // collect test result summary and status
    var positive = testInfo.resultCount("passes");
    var negative = testInfo.resultCount("fails")
                     + testInfo.resultCount("errors")
                     + testInfo.resultCount("fatals");
    
    var msg = "TEST RESULTS - Passed: " + positive + " | "
               + "Failed/Errored/Fatal: " + negative;
    
    var status = negative == 0 ? "PASSED" : "FAILED";
    
    // output results & status to Coco report file 
    var file = File.open(execution, "a");
    file.write("<html><body>" + msg + "</body></html>\n");
    file.write("!" + status + "\n")
    file.close();  
}

Read more in the KB article and other Squish resources below:

Squish tip of the week: How to produce more accurate test results


To produce accurate and meaningful test results, each test should be able to run alone and as part of a test suite.

What if my test requires my application be in a specific state to run?

The goal of your test is to determine whether a particular requirement is met or a result is produced. The path to setting up the application state in which the test can be performed should not dictate whether the test itself passes or fails.

How can I separate the two?

Incorporating setup and teardown steps outside the main test ensures test results indicate the status of the actual requirement or feature targeted.

Use Squish’s init() and cleanup() functions in addition to the main() function to break the test apart:

def init():
    try:
        # setup steps to execute prior to main test
        test.log("trying some aut setup")
    except Exception as e:
        test.fatal("Test setup failed. Main will not" +
                  " run.", "See 'Script Error' details.")
        raise Exception(e)

def main():    
    # main executes only if init() finished without issue
    startApplication("AddressBookSwing.jar")
    activateItem(waitForObjectItem(":Address Book_javax"
                            + ".swing.JMenuBar", "File"))
    test.log("does it get here?")
    
def cleanup():
    # cleanup steps to execute after the main test,
    # even if main fails or never executes    
    test.log("cleaned up from test case")

Sample Report

Summary

Test Cases 1
Tests 0
Passes 0
Fails 0
Errors 1
Fatals 1

Results

tst_general Thu Oct  2 13:34:49 2014
FATAL Test setup failed; main() will not execute. See ‘Script Error’ for details.
ERROR Script Error Exception: Item ‘Filessss’ in object ‘:Address Book_javax.swing.JMenuBar’ not found or not ready.
LOG Post test case cleanup complete
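The ordering guarantees above (init() runs first, main() runs only if setup succeeds, cleanup() always runs) can be emulated in plain Python, with no dependency on the Squish API; the function names mirror Squish’s but the bodies are hypothetical.

```python
# Plain-Python emulation of Squish's test case ordering: a failing init()
# skips main(), but cleanup() still runs, mirroring the sample report above.
order = []

def init():
    order.append("init")
    raise RuntimeError("setup failed")   # simulate a failing setup step

def main():
    order.append("main")                 # skipped because init() raised

def cleanup():
    order.append("cleanup")

try:
    init()
    main()
except Exception as e:
    order.append("caught: %s" % e)
finally:
    cleanup()

print(order)   # ['init', 'caught: setup failed', 'cleanup']
```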

Ad-hoc testing or test randomization has also proven beneficial

Read more about test randomization here: Squish tip of the week: Alter test scenario workflow to increase test effectiveness

Read more in other Squish resources below:

Measuring QML Coverage


Last year we started receiving the first requests for QML coverage. “Sure, we’ll look into it,” we replied. It seemed like a logical extension of our cross-language coverage tool Squish Coco. At least at first sight.

At this year’s Qt Contributors’ Summit the question came up independently in one of the sessions. I had nothing to show back then. But now, there’s finally a prototype accomplishing a proof of concept. To be seen live in action at froglogic’s Qt Developer Days 2014 booth.

[Screenshot: CoverageBrowser]

With QML finding its way into “real” applications it’s a natural move for quality-aware developers to track its test coverage. A must for safety-critical software. Would it be much different from the current set of supported languages like C, C++, C# or the scripting language Tcl? When looking closer one quickly realizes two fundamental differences:

  • Besides some embedded JavaScript code, QML is mostly about the UI. In fact, there can be zero JavaScript, so no if() statements or for() loops leading to execution-flow branching. Just buttons, dials, menus, etc.
  • QML is a declarative language. Leaving the optional JavaScript aside nothing gets executed in the classic sense. By definition there is no control flow.

After some brainstorming and mind-bending sessions we ended up going for a cross-over solution: classic code coverage gets intertwined with what I’ll call “GUI coverage”. The latter measures test coverage by the degree to which control elements are being exercised. For a button this would be a click, for example. Same for a menu item. Whether alternate keyboard usage is required as well can be configured.

To demonstrate the output of such an approach we grabbed the sample Unit Converter application Reggie is going to use during Tuesday’s session on Behavior-Driven Development (BDD):

[Screenshot: Unit Converter application]

After the GUI gets exercised by a diligent human tester or an automated testing tool like Squish, coverage data will be emitted. Analysis can be done using an interactive tool (see the CoverageBrowser screenshot at the top) or an HTML report. The project leader might just care about the overall number, but a developer may want to study details. Was each ‘case’ of a ‘switch’ statement hit? Did each choice of a radio button group get used at least once? Here’s an overview showing percentages for control elements as well as JavaScript code:

[Screenshot: coverage percentages overview]

Analogous to coloring pieces of code, the usage of graphical elements is visualized: the “Convert” button was clicked (green). The unit selectors remained unused (red). As a result the whole dialog was only partially tested (orange).

[Screenshot: partially covered dialog with colored GUI elements]

Application components implemented in C++ and using QWidgets would be included in the above metrics, too. For a live demo visit our booth in Berlin between October 6-8. Others are invited to provide feedback and ask questions via the comment field.

Yet another static code analyzer run


Looking for the answer to a 64-bit build question I ran into a news item titled “The Unicorn Getting Interested in KDE”. Since I had never seen a unicorn before, this made me curious.

Turns out that a company selling a static code analysis tool has been analysing KDE code. This is not the first time someone has provided such feedback to an Open Source project.

My favourite finding is this redundant if() statement:

if ( type == "String" ) t += defaultValue; //<==
else t+= defaultValue; //<==

Can anyone tell how old the KDE code base is? And did they approach anyone from the project yet? The posting is just two days old, but it might already be old news in today’s age...

Squish tip of the week: Enable Verbose Test Result Logging


Need more detailed information in your test results?

Nightly or scheduled test run results often provide valuable quick-read information.

What about times when verbose logging, or a Test Audit Log, may prove valuable?

The following example illustrates how to create a fully-customizable Test Audit Log using Squish. Each action is modified to include a log message and description when executed. Simply calling the enableVerboseLogging() function from the main() test case activates verbose logging.

This functionality is available for all Squish-supported scripting languages and application toolkits. The example below is in Python, using a Java Swing application.

source(findFile("scripts","logScriptActions.py"))

def main():
    startApplication("AddressBookSwing.jar")
    enableVerboseLogging()
    ...
# activate item
def alterActivateItem(activateItemFunction):      
    def wrappedFunction(menuObject, 
                       logText="activateItem() called"):
       test.log(logText, 'Activated item %s' % objectMap.
                symbolicName(menuObject))
       activateItemFunction(menuObject)
    return wrappedFunction

# click button   
def alterClickButton(clickButtonFunction):       
    def wrappedFunction(button, logText="clickButton()"
                        + " called"):    
        test.log(logText, 'Clicked %s' % objectMap.
                 symbolicName(button))
        clickButtonFunction(button)
    return wrappedFunction
    
# mouse click 
def alterMouseClick(mouseClickFunction):   
    def wrappedFunction(objectToClick, posX=None, 
                   posY=None, buttonClicks=None, 
                   buttonState=None, buttonPressed=None,
                   logText="mouseClick() called"): 
        test.log(logText,'Mouse clicked %s' % objectMap.
                 symbolicName(objectToClick))
        mouseClickFunction(objectToClick)
    return wrappedFunction

# type
def alterTypeFunction(typeFunction):
    def wrappedFunction(objectToTypeIn, stringInput,
                        logText="type() called"):
        test.log(logText, 'Typed %(text)s in %(field)s'
                 % {"text":stringInput, 
                    "field":str(objectMap.
                        symbolicName(objectToTypeIn))})
        typeFunction(objectToTypeIn, stringInput)
    return wrappedFunction

# call Squish function modifications
def enableVerboseLogging():
    test.log("Verbose logging enabled")
    
    global activateItem
    activateItem = alterActivateItem(activateItem)
    
    global clickButton
    clickButton = alterClickButton(clickButton)
    
    global mouseClick
    mouseClick = alterMouseClick(mouseClick)
    
    global type
    type = alterTypeFunction(type)
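The wrapping technique above is ordinary Python function decoration; here is a minimal standalone sketch with no Squish dependency (click_button is a hypothetical stand-in for the real clickButton() action):

```python
log = []

def with_logging(fn):
    # Return a wrapper that records a log entry before delegating to fn
    def wrapped(*args, **kwargs):
        log.append("%s() called" % fn.__name__)
        return fn(*args, **kwargs)
    return wrapped

def click_button(name):
    # Hypothetical stand-in for the real Squish clickButton() action
    return "clicked %s" % name

# Rebind the name, just as enableVerboseLogging() does with globals
click_button = with_logging(click_button)

result = click_button("OK")
print(result)   # clicked OK
print(log)      # ['click_button() called']
```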


Download suite_AuditLogReport_py example

Squish tip of the week: How to slow script playback down


At times, having a script play back more slowly may be helpful.

The sample script below snoozes for 0.5 seconds between each step in main():

import sys
  
def for_each_call(frame, event, arg):
    snooze(0.5)
  
def init():
    sys.setprofile(for_each_call)
  
def main():
    ...

Read the following knowledgebase article to learn additional approaches:
Article – Slowing Down Test Script Execution
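The sys.setprofile hook is standard Python, so the mechanism can be tried outside Squish; in this runnable sketch time.sleep stands in for Squish’s snooze(), and the step functions are hypothetical:

```python
import sys
import time

calls = {"count": 0}

def for_each_call(frame, event, arg):
    # The hook fires for every profiling event; slow down only Python calls
    if event == "call":
        calls["count"] += 1
        time.sleep(0.01)   # stand-in for Squish's snooze(0.5)

def step_one():
    pass

def step_two():
    pass

def main():
    step_one()
    step_two()

sys.setprofile(for_each_call)
main()
sys.setprofile(None)
print(calls["count"])   # 3: main, step_one, step_two
```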

Squish tip of the week: Import collection of test cases to HP’s QC / ALM


Did you know teams can execute, manage and analyze Squish tests and results using Squish’s QC ALM Add-on?

Using the example below, you can also import a collection of tests to QC ALM in a single call!

Import Call and Parameters (Windows batch syntax)
set testSuitePath=C:\Path\To\Test\Suite\
set objectmap=C:\Path\To\objects.map
set tcList=tst_contact,tst_fail

importListOfTestCases.py --testSuitePath %testSuitePath%
               --objectmap %objectmap% --tcList %tcList%


Script: Import List Of Test Cases
#!/usr/bin/python
# importListOfTestCases.py

import os
import subprocess
from subprocess import Popen

def importListOfTCs(testSuitePath,objectmap,tcList,qciPath, server, domain, project, user, password, path):    
    m = []
    for testCase in tcList:
        importTC = ("%(qciPath)s --testcase %(testCasePath)s "
                    "--objectmap %(objectmap)s --server %(server)s "
                    "--attach-shared-folder --domain %(domain)s "
                    "--project %(project)s --user %(user)s "
                    "--password %(password)s --path %(path)s --replace"
                    % {"qciPath": qciPath,
                       "testCasePath": testSuitePath + "\\" + testCase,
                       "objectmap": objectmap, "server": server,
                       "domain": domain, "project": project,
                       "user": user, "password": password, "path": path})
        proc = Popen(importTC, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        exitcode = proc.returncode
        if err:
            m.append("Error: %s" % err)
            
        if exitcode != 0:
            m.append("Unexpected exit code: %s" % exitcode)
        
        if out:
            m.append("Test case imported: %s" % out)
    msg = "\n".join(m)
    return "\r\n----------------\r\nImport results\r\n----------------\r\n%s" % msg

# ** Use the hard-coded parameters below, or comment them out and provide them using --<parameterName> <value> when calling the script
# testSuitePath = "C:\\Path\\To\\Test\\Suite\\"
# objectmap = "C:\\Path\\To\\objects.map"
# tcList = ["tst_contact","tst_fail"]
qciPath = "C:\\Path\\To\\qcimporter.exe"
server = "<server>"
domain = "<domain>"
project = "<projectName>"
user = "<user>"
password = "<password>"
path = "\"/PathForTestsInQC\""
    
conf = importListOfTCs(testSuitePath,objectmap,tcList,qciPath, server, domain, project, user, password, path)
print conf

Read more in other Squish resources below:

Squish tip of the week: How to determine if a checkbox is checked?


You can check the state of a checkbox, radio button, or the property of any other object or widget using Squish.

Using a test.compare() Verification Point

The line below looks at the object :Controls.My_CheckBox and indicates if the verification point passed or failed: passed if the checkbox is checked, and failed if the checkbox isn’t checked:

test.compare(findObject(":Controls.My_CheckBox").checked, True)

Using <object>.<property>

You can also save an object’s properties to a variable, and access the properties in your script. The example below saves the object’s properties to checkBoxToVerify, after which it logs a message indicating if the checkbox is checked or unchecked:

checkBoxToVerify = waitForObject(":Controls.My_CheckBox")
if checkBoxToVerify.checked:
    test.log("Checkbox already checked")
else:
    test.log("Checkbox not checked")

Squish tip of the week: Use data to validate endless scenarios


You’ve most likely heard of data-driven tests, a key term often used in Automated GUI Testing.

But did you realize the power a set of data can provide within a single test case?

Take a simple example of enrolling a user in a membership-based program. Within the few registration fields, many variables exist:

  • What are all valid characters in each given field?
  • What are the invalid characters in each given field?
  • Minimum number of characters
  • Maximum number of characters
  • Must the field contain specific characters, such as an @ symbol?
  • Is each given field required? Or under what circumstances is each given field required?
  • etc.

Using a single iterating test case with a set of input data and the anticipated results, you can validate a large set of business rules, features and requirements.
Now imagine validating each of these scenarios manually…

Think of an area in your application where your testing could greatly benefit from such a scenario. Incorporate the test, among others, in your build process, and measure the testing duration savings!
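A minimal sketch of the iterating pattern, using a hypothetical email-field rule (minimum 3 characters, maximum 50, must contain an @ symbol); each row pairs an input with the anticipated result:

```python
def is_valid_email(value):
    # Hypothetical business rule: length bounds plus a required @ symbol
    return 3 <= len(value) <= 50 and "@" in value

cases = [
    ("a@b.co", True),        # typical valid address
    ("no-at-sign", False),   # missing the required @
    ("", False),             # below the minimum length
    ("a" * 49 + "@", True),  # exactly at the maximum length boundary
]

# One iterating test case covers every rule in the data table
for value, expected in cases:
    assert is_valid_email(value) == expected, "failed for %r" % value

print("all %d cases passed" % len(cases))
```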


Squish tip of the week: How to execute keyboard and mouse combination commands


How can I perform a keyboard + mouse combination in a test script?

The mouseClick function accepts a modifierState and a button. The modifier state accepts a set of keyboard commands.
Take for example performing a Control + Shift + Right Click

Given a Java Swing application, Shift is 1 and Control is 2.
This results in mouseClick(":myObject", 5, 5, 1|2, Button.Button3), where the Shift + Control keys are pressed, followed by Button3, a right click.

Each application type may use different modifier values:

Java Applications
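The modifier arithmetic itself is a plain bitwise OR; a quick sketch using the Java Swing values from the text (Shift is 1, Control is 2):

```python
SHIFT = 1
CONTROL = 2

# Holding both keys combines the flags with bitwise OR
modifier_state = SHIFT | CONTROL
print(modifier_state)   # 3

# Each held key can still be tested independently with bitwise AND
assert modifier_state & SHIFT
assert modifier_state & CONTROL
```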

New video tutorial: Finding child objects using parent

Squish tip of the week: Handling mouse position sensitive drag and drops

$
0
0

Mouse cursor position can impact some drag and drop objects in applications.

In a recent application I was working with, dragging and dropping objects to new locations worked with the built-in drag and drop functionality; however, connecting those objects with a workflow-style arrow required the mouse cursor to first hover over a specific quadrant of the object before the line drawing could take place using a drag and drop command.

To address this, I created a slightly altered drag and drop function called dragAndDropConnection. While the example below applies to a Java application, the same approach is applicable to other application toolkits.

function dragAndDropConnection(canvasObj, sx, sy, dx, dy){
    mouseMove(waitForObject(canvasObj), sx, sy);
    snooze(2);
    dragAndDrop(waitForObject(canvasObj), sx, sy, 
        canvasObj, dx, dy, DnD.DropNone);
}

The mouseMove line is the key to this case. The snooze(2), while not required, helped me to witness the breakdown of the dragAndDropConnection function.

Keep in mind, the coordinates are relative coordinates, relative to the object on which the action is performed, in this case, the canvas.

To further optimize connecting the drag and dropped objects, one could calculate the relative location from which to draw the connection based on the original drag and drop location of the two objects on the canvas.

The arrow you wish to move the cursor to resides on the left edge of the object, and about half-way between the top and the center (vertically). Given that, you can use:

mouseMove(waitForObject(canvasObj), sourceElementX * 0.9,
    sourceElementY * 0.8);


This allows you to calculate drawing the connection between the two objects based on their dropped locations as follows:

function dragAndDropConnection(canvasObj, sourceElementX,
    sourceElementY, destinationElementX,
    destinationElementY) {
    sx = sourceElementX * 0.9;
    sy = sourceElementY * 0.8;
    mouseMove(waitForObject(canvasObj), 
        sx, sy);
    snooze(2);
    dragAndDrop(waitForObject(canvasObj), sx, sy, 
        canvasObj, destinationElementX, 
        destinationElementY, DnD.DropNone);
}
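The coordinate scaling is simple arithmetic and can be sketched in plain Python; the 0.9 and 0.8 ratios come from the text, while the function name is ours:

```python
def connection_start(source_element_x, source_element_y):
    # Scale the element's coordinates by the ratios given in the text
    # to find the hover point from which the connection is drawn
    return source_element_x * 0.9, source_element_y * 0.8

sx, sy = connection_start(100, 50)
print(sx, sy)   # 90.0 40.0
```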

Squish tip of the week: How to enable Code Completion for Shared Scripts (python)


Did you know that by enabling a few Eclipse PyDev settings and including an import statement in your Test Case, code completion also displays any functions defined in your Test Suite Resource scripts?

Adjust Code Completion Settings

  1. With a Squish Test Suite open, select Edit (or Squish IDE) > Preferences
  2. Expand PyDev > Editor and then select Code Completion
  3. Check the three Request Completion on… options
  4. Click OK

Tell your Test Suite where to find the Test Suite Resource scripts

  1. Click Window > Show View… > Other
  2. Expand PyDev and select PyDev Package Explorer
  3. From the PyDev Package Explorer, right click the currently active Test Suite and select Properties
  4. Select PyDev – PYTHONPATH
  5. Click Add Source Folder
  6. Expand the Test Suite > Test Suite Name > shared, and select scripts (if you do not have any Test Suite Resource Scripts this folder may not exist)
  7. Click Force restore internal info
  8. Click OK, and click OK again on the Properties window

Now you are ready to begin using the code completion in your Test Suite!

Given a Test Suite Resource script script_1.py, create a new Test Case or open an existing one and add the following:

if 0: from script_1 import *
source(findFile("scripts","script_1.py"))

def main():

With a function within your script file in mind, begin typing the first few characters of the function name as illustrated below:

Result?

The code completion is now enabled for general scripting, and Squish is also now aware of any custom functions you create and indicate using the syntax above.


Check out our video library later this week for a video demonstration of working with code completion.

Additional Resources

Squish tip of the week: Test against multiple iOS devices simultaneously


Run your Squish for iOS test suite against multiple devices at the same time from a single computer

  • Map the Devices and Apps
  • Modify script to get Mapped AUT from environment variable
  • Execute squishrunner once for each mapped AUT to test


Map the Devices to use: Add Attachable AUT
From IDE
  1. Select Edit > Server Settings > Manage AUTs
  2. Select Attachable AUTs and click Add…
  3. Enter a name for one of the device and app combinations in the Name box
  4. Enter the IP Address of the device in the Host box
  5. Enter the port on which the app is listening in the Port box
  6. Click Add
  7. Repeat steps 2 through 6 for each additional device

OR

From Command line

From Squish’s bin directory execute the following for each device and app combination:
squishserver --config addAttachableAUT <DeviceAppName> <IPAddress>:<Port>

Modify script to get Mapped AUT from environment variable
var device = OS.getenv("TEST_DEVICE");
attachToApplication(device);


Execute squishrunner once for each mapped AUT to test

With the App launched on each device, execute the following from Squish’s bin directory:
TEST_DEVICE=<attachableAUT1> ./squishrunner --testsuite <suitePath>

Each time the command executes, the tests run on the specified device, allowing tests to run in parallel.
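Launching one squishrunner per device can itself be scripted. The sketch below only builds the per-device commands and environments rather than executing them; the device names and suite path are hypothetical:

```python
import os

# Hypothetical Attachable AUT names registered with squishserver
devices = ["iPhone_AddressBook", "iPad_AddressBook"]

commands = []
for device in devices:
    # Each run gets its own TEST_DEVICE value, which the script reads back
    env = dict(os.environ, TEST_DEVICE=device)
    cmd = ["./squishrunner", "--testsuite", "suite_mobile"]
    # subprocess.Popen(cmd, env=env) would start these runs in parallel
    commands.append((cmd, env))

for cmd, env in commands:
    print(env["TEST_DEVICE"], " ".join(cmd))
```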

Related reading

Automated Batch Testing
How to run simultaneous tests against multiple devices from a single computer
Request your free 30 day Squish evaluation
Learn more about Squish
Other Squish Resources (videos, tech articles, user communities and more)
