
Streamline Software Testing with Data Driven Automated Testing

Today’s post is written by Michael Grater, a 2005 graduate of the AIM Program and Quality Assurance lead at R/GA, a digital design firm that provides applications, design, and digital advertising for some of the largest companies. We asked Michael to share his experience with the automated software testing process and provide methods for improving your own testing and quality assurance.

In software testing, it is very difficult to anticipate all of the different actions an end user might take. Large applications can include a million lines of code or more, and it is incumbent on the quality assurance professional to test all of the potential scenarios within the application. Testing these paths one at a time is cumbersome, so it is common to automate the testing as a series of repeated scenarios. This is much the same as employing a robot to test the new iPhone 6 to ensure that the customer experience is error free. A 100% success rate is often impossible because of the complexity of modern applications, but the method Michael proposes below will make the process quicker and more efficient, and will catch many errors that would otherwise slip into the final version of the software.

There are instances in the software testing process when you have either a large number of scenarios or a large number of features to cover in a short amount of time. These scenarios may also involve running the same tests repeatedly over a period of time. In cases like this, it becomes advantageous to use a method of test automation known as “data driven testing.”

 

As an example, a typical login screen consists essentially of two fields and a button, and the steps would look like this:

  1. Open URL,
  2. Click on Login link,
  3. Enter User ID/email address,
  4. Enter password,
  5. Click on login button.

At the same time, there are multiple scenarios when it is necessary to test such features as:

  1. Entering a valid user ID and invalid password,
  2. Entering an invalid user ID and valid password,
  3. Leaving the user ID field blank but entering a password,
  4. Entering a user ID but leaving the password field blank.
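
Scenarios like these lend themselves naturally to a data file. A hypothetical CSV for the login test might look like the following (the column names and values here are purely illustrative, not part of any particular tool):

```
url,user_id,password,expected_result
https://example.com/login,valid.user@example.com,ValidPass1,success
https://example.com/login,valid.user@example.com,WrongPass,failure
https://example.com/login,unknown.user@example.com,ValidPass1,failure
https://example.com/login,,SomePass,failure
https://example.com/login,valid.user@example.com,,failure
```

Each row becomes one execution of the same scripted steps.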

Using an automation test tool, the steps can be programmed once: enter a user ID, enter a password, and click a button. The test can then be configured to run based on data in a file (a spreadsheet, flat file, or XML file). The data is imported into the test, and the execution steps use it to enter values in fields, provide URL links to be opened, or supply values that map to features such as buttons or navigation.
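
In Python, for instance, importing the data can be as simple as parsing the file into one parameter record per test run. This is a minimal sketch; the file layout and column names are assumptions, and in practice the text would come from a file on disk rather than an inline string:

```python
import csv
import io

# Hypothetical data file contents; a real test would read this from disk,
# e.g. with open("login_tests.csv").
DATA_FILE = """url,user_id,password
https://example.com/login,valid.user@example.com,ValidPass1
https://example.com/login,valid.user@example.com,WrongPass
"""

def load_test_data(text):
    """Parse the data file into one parameter dictionary per test run."""
    return list(csv.DictReader(io.StringIO(text)))

records = load_test_data(DATA_FILE)
print(len(records))           # one record per planned test execution
print(records[0]["user_id"])  # values feed the parameterized test steps
```

Each dictionary in `records` then supplies the URL, user ID, and password for one pass through the scripted steps.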

The logic behind the test would look like this:

Load data file (contains URL, user ID, and password data)

For each record in the data file:

  • Open URL (variable, parameterized),

–  Pull value from data file—insert into test,

  • Click on Login link,
  • Enter user ID (variable, parameterized),

–  Pull value from data file—insert into test,

  • Enter password (variable, parameterized),

–  Pull value from data file—insert into test,

  • Click on Login button.

View page—confirm successful login.

Is there more data? Then, go to the next record.

Although the test consists of approximately four or five steps, it is configured to loop and execute based on the amount of data stored in the file.

The benefit of doing it this way is that the test can be reused over and over again. If a particular feature has to be retested, the automation test can be executed, generating a report of which tests passed or failed. This also frees up QA staff to focus on other areas of the project, perhaps areas where automation is not an option.

When working with test automation, you need adequate time to plan, set up, and verify that the test is working correctly. Test automation can become a very efficient way to test software, but it is not always a viable solution.

This concept can be applied to any automation tool. In my current position with R/GA, we’ve used this technique on multiple projects with tools such as Selenium and JMeter.

Michael Grater

QA Lead, R/GA New York