Tag Archives: software development

Streamline Software Testing with Data Driven Automated Testing

Today’s post is written by Michael Grater, a 2005 graduate of the AIM Program and Quality Assurance lead at R/GA, a digital design firm that provides applications, design, and digital advertising for some of the world’s largest companies. We asked Michael to share his thoughts on his experience with the automated software testing process and to offer methods for improving your own testing and quality assurance. In software testing, it is very difficult to anticipate all of the different actions an end user might take. Large applications can include a million lines of code or more, and it is incumbent on the quality assurance professional to test all of the potential scenarios within the application. Testing these paths one at a time is very cumbersome, so it is common to automate the testing into a series of repeated scenarios, much as one might employ a robot to test the new iPhone 6 to ensure that the customer experience is error-free. A 100% success rate is often impossible because of the complexity of modern applications, but the method Michael proposes below makes the process quicker and more efficient and uncovers more of the errors that would otherwise slip into the final version of the software.

There are instances in the software testing process when, in a short amount of time, you have either a large number of scenarios or a large number of features to cover. These scenarios may also involve running the same tests repeatedly over a period of time. In cases like this, it becomes advantageous to use a method of test automation known as “data driven testing.”


As an example, a login screen in most cases consists of essentially two fields and a button, and the steps would look like this:

  1. Open URL,
  2. Click on Login link,
  3. Enter User ID/email address,
  4. Enter password,
  5. Click on login button.
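
Scripted once in an automation tool, those steps map almost one-to-one to code. Here is a minimal sketch using Selenium’s Python bindings, assuming a hypothetical URL and element IDs:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()

    driver.get("https://www.example.com")                                  # 1. open URL
    driver.find_element(By.LINK_TEXT, "Login").click()                     # 2. click on Login link
    driver.find_element(By.ID, "user_id").send_keys("user@example.com")    # 3. enter user ID/email
    driver.find_element(By.ID, "password").send_keys("secret")             # 4. enter password
    driver.find_element(By.ID, "login_button").click()                     # 5. click on login button

    driver.quit()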

At the same time, there are multiple scenarios that need to be tested, such as:

  1. Entering a valid user ID and invalid password,
  2. Entering an invalid user ID and valid password,
  3. Leaving the user ID field blank but entering a password,
  4. Entering a user ID but leaving the password field blank.

Using an automation test tool, the steps can be programmed once: entering a user ID, entering a password, and clicking on a button. The test can then be configured to run based on data in a file (spreadsheet, flat file, XML file). The data is imported into the test, and the execution steps leverage the data to enter values in fields, provide URLs to be opened, or supply coded values that map to features such as buttons or navigation.
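
For example, the login scenarios above might be stored in a simple CSV file like this (the column names and values are illustrative):

    url,user_id,password,expected_result
    https://www.example.com,valid.user@example.com,GoodPass1,success
    https://www.example.com,valid.user@example.com,BadPass,failure
    https://www.example.com,wrong.user@example.com,GoodPass1,failure
    https://www.example.com,,GoodPass1,failure
    https://www.example.com,valid.user@example.com,,failure

Each row drives one execution of the same scripted steps, and the last column records the outcome the test should expect.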

The logic behind the test would look like this:

Load data file (contains URL, user ID, and password data).

For each record in the file:

  • Open URL (variable, parameterized),
      –  pull value from data file, insert into test,
  • Click on Login link,
  • Enter user ID (variable, parameterized),
      –  pull value from data file, insert into test,
  • Enter password (variable, parameterized),
      –  pull value from data file, insert into test,
  • Click on Login button,
  • View page, confirm successful login.

Is there more data? Then go to the next record.

Although the test consists of only four or five steps, it is configured to loop and execute once for each record stored in the data file.
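
Putting the pieces together, a minimal data-driven version of the earlier Selenium sketch might loop over the CSV file like this (the element IDs and the success check are placeholder assumptions, not from a real application):

    import csv
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    with open("login_tests.csv", newline="") as f:
        for row in csv.DictReader(f):
            driver = webdriver.Chrome()
            driver.get(row["url"])                                             # open URL from data file
            driver.find_element(By.LINK_TEXT, "Login").click()                 # click on Login link
            driver.find_element(By.ID, "user_id").send_keys(row["user_id"])    # enter user ID
            driver.find_element(By.ID, "password").send_keys(row["password"])  # enter password
            driver.find_element(By.ID, "login_button").click()                 # click on Login button
            # view page, confirm the outcome matches the expectation in the file
            logged_in = "Log Out" in driver.page_source                        # placeholder success check
            expected = row["expected_result"] == "success"
            print(row["url"], row["user_id"], "PASS" if logged_in == expected else "FAIL")
            driver.quit()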

The benefit of doing it this way is that the test can be reused over and over again. If a particular feature has to be retested, the automation test can be executed, generating a report of which tests have passed or failed. This also frees up QA staff to focus on other areas of the project, particularly those where automation is not an option.

When working with test automation, you need adequate time to plan, set up, and verify that the test is working correctly. Test automation can become a very efficient way to test software, but it is not always a viable solution.

This concept can be applied to any automation tool. In my current position with R/GA, we’ve used this technique on multiple projects with tools such as Selenium and JMeter.

Michael Grater

QA Lead, R/GA New York

Ready, Set, Go! The Value of Testing Before a Web Launch

IT has been in the news lately with the less-than-successful launch of the government healthcare exchange website, healthcare.gov. For some, it is an indictment of technology gone wild and for others it is clear evidence of political ineptitude. I am not used to seeing political cartoons about information professionals, but it is becoming the new norm. The reasons behind the failure are complex, but I would like to focus on one this week that I think will help all of us to avoid missteps such as this in the future. One thing that is imperative at a new launch is load testing.

Definition

Load testing is defined as subjecting the system under test to the largest number of tasks it is designed to handle. It also means understanding the behavior of the system under that maximum load. Is it just slow, or is it completely unavailable? One may be acceptable and the other unacceptable. Load testing is most successful when the maximum number of users or concurrent processes is known in advance. In the case of the healthcare exchange, I believe that there is enough data to predict how many people would try to access the site in any given period.
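
To make the idea concrete, here is a toy sketch in Python, using only the standard library, that simulates a number of concurrent users hitting a placeholder URL and reports how the site holds up; a real load test would use a dedicated tool such as JMeter:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://www.example.com"   # placeholder target site
    USERS = 500                       # number of simulated concurrent users

    def visit(_):
        # One simulated user: request the page, record success and elapsed time.
        start = time.time()
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                ok = resp.status == 200
        except Exception:
            ok = False
        return ok, time.time() - start

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(visit, range(USERS)))

    times = [t for ok, t in results if ok]
    print(f"{len(times)} of {USERS} requests succeeded")
    if times:
        print(f"average response time: {sum(times) / len(times):.2f}s")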

All the Way Down the Line

It all sounds so simple but it can get very complicated. Not only do you need to test the potential load on the website and the web application, but you also need to test the potential load on the web server, the database serving the information, the database server, and the network tying it all together. Weak performance in any of these can cause the kind of problems seen with healthcare.gov. This is where an IT troubleshooter is worth their weight in gold. Someone who understands the interoperability between all of these systems and processes can root out potential problems before the application goes live.

Common Sense

When to launch a new application or website is also partly common sense. Computer testing can only go so far. If what is reported in the Washington Post is correct, not only did the load test fail, the common sense test failed as well:

“Days before the launch of President Obama’s online health insurance marketplace, government officials and contractors tested a key part of the Web site to see whether it could handle tens of thousands of consumers at the same time. It crashed after a simulation in which just a few hundred people tried to log on simultaneously. Despite the failed test, federal health officials plowed ahead.”

In cases such as this I think of the immortal words of Walt Kelly: “We have met the enemy, and he is us.”

Thoughts

It is possible to correctly predict the performance of an application and have a wildly successful launch. Do you have stories of successes, large or small? Do you have stories of failures that you would just as soon forget but provided great lessons to you and others? I encourage you to share your story so that we can all learn. Let me know your thoughts.

About Kelly Brown

Kelly Brown is an IT professional, adjunct faculty for the University of Oregon, and academic director of the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.