
AI-Driven Testing

As part of their digital transformation goals, companies are actively exploring how AI can help them. With the rise of DevOps and Continuous Delivery, businesses now expect real-time risk assessment throughout the various stages of the software delivery cycle. AI is undeniably valuable—and necessary—for transforming testing to meet these new expectations. Nevertheless, it’s important to realize that not every AI-driven software testing technology is the panacea it’s cracked up to be. While some are poised to deliver distinct business benefits in the near future, others don’t seem ready to live up to the hype.


The surface area for testing software has never been so broad. Applications today interact with other applications through APIs, leverage legacy systems, and grow more complex from one day to the next in a nonlinear fashion. What does that mean for testers?


The 2016-17 World Quality Report suggests that AI will help. “We believe that the most important solution to overcome increasing QA and Testing Challenges will be the emerging introduction of machine-based intelligence,” the report states.


Big changes coming:


How will we as testers leverage AI to verify these ever-growing codebases? And what will happen as AI works its way into our production applications? How will testing change?


We know that the next generation of testers will soon “laugh at the notion of selecting, managing, and driving systems under test (SUT)—AI will do it faster, better, and cheaper.”


Moshe Milman and Adam Carmi, co-founders of Applitools, which makes an application meant to “enhance tests with AI-powered visual verifications,” say there will be “a range of possible outcomes. A test engineer would need to run a test many times and make sure that statistically, the conclusion is correct. The test infrastructure would need to support learning expected test results from the same data that trains the decision-making AI.”
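To make that concrete, here is a minimal sketch of a statistical assertion, assuming a hypothetical nondeterministic classify() function as the system under test (the 90% threshold is an arbitrary illustration):

```python
import random  # stand-in for a real, nondeterministic AI model

# Hypothetical system under test (SUT): a classifier whose output
# can vary between runs, as real AI models often do.
def classify(image_id: str) -> str:
    return random.choices(["cat", "dog"], weights=[0.95, 0.05])[0]

def test_classify_is_statistically_correct():
    """Run the SUT many times and assert that the expected label
    dominates, instead of demanding one deterministic answer."""
    runs = 200
    hits = sum(classify("img_001") == "cat" for _ in range(runs))
    # 90% agreement tolerates occasional model noise while still
    # failing on a genuine regression.
    assert hits / runs >= 0.90, f"only {hits}/{runs} runs returned 'cat'"
```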


AI’s interactions with the system multiply the results you’d have with manual testing. Consider ReTest, an AI-based testing program. Currently in beta, ReTest can generate test cases for Java Swing applications.
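ReTest itself exercises Java Swing GUIs, but the underlying idea of machine-generated test cases can be sketched in a few lines. This illustrates property-checking over generated inputs, not ReTest’s actual mechanism; normalize_username and random_input are hypothetical:

```python
import random
import string

# Hypothetical function under test.
def normalize_username(raw: str) -> str:
    return raw.strip().lower()

def random_input(max_len: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + " _-"
    length = random.randint(0, max_len)
    return "".join(random.choice(alphabet) for _ in range(length))

def test_generated_cases_respect_invariants():
    # Generate many inputs and check properties that must hold
    # for any input, rather than hand-writing each case.
    for _ in range(500):
        raw = random_input()
        result = normalize_username(raw)
        assert result == result.strip()   # no surrounding whitespace
        assert result == result.lower()   # always lower-cased
        assert len(result) <= len(raw)    # normalization never grows input
```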


If generating test cases isn’t enough to convince you, Infosys now has an offering for “artificial intelligence-led quality assurance.” The idea is that the Infosys system uses the data in your existing QA systems (defects, resolutions, source code repositories, test cases, logging, etc.) to help identify problem areas in the product.
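As a rough illustration of the idea (not Infosys’s actual system), historical defect data can be weighted and aggregated to rank modules by risk, pointing testers at the hot spots first:

```python
from collections import Counter

# Toy defect log; a real pipeline would mine the bug tracker,
# source repository, and test-case history the offering describes.
defect_log = [
    {"module": "billing", "severity": "high"},
    {"module": "billing", "severity": "low"},
    {"module": "auth",    "severity": "high"},
    {"module": "billing", "severity": "high"},
    {"module": "search",  "severity": "low"},
]

# Weight defects by severity so the riskiest modules rank first.
weights = {"high": 3, "low": 1}
risk = Counter()
for defect in defect_log:
    risk[defect["module"]] += weights[defect["severity"]]

for module, score in risk.most_common():
    print(f"{module}: risk score {score}")
# billing: risk score 7
# auth: risk score 3
# search: risk score 1
```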


Echoing the same vision of AI as testing assistant projected by Rößler and Infosys, Milman and Carmi claim, “First, we'll see a trend where humans will have less and less mechanical dirty work to do with implementing, executing, and analyzing test results, but they will still be an integral and necessary part of the test process to approve and act on the findings. This can already be seen today in AI-based testing products like Applitools Eyes.”
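A naive version of visual verification can be sketched with a raw pixel diff; products like Applitools Eyes replace this with far smarter AI-based perceptual comparison, so the following is only a baseline illustration using the Pillow library:

```python
from PIL import Image, ImageChops  # pip install Pillow

def screens_match(baseline_path: str, current_path: str,
                  tolerance: float = 0.001) -> bool:
    """Naive visual check: the fraction of differing pixels must
    stay under `tolerance`."""
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return False  # layout change: flag for human review
    diff = ImageChops.difference(baseline, current)
    changed = sum(1 for pixel in diff.getdata() if pixel != (0, 0, 0))
    total = baseline.size[0] * baseline.size[1]
    return changed / total <= tolerance
```

The human stays in the loop exactly as Milman and Carmi describe: the code flags a mismatch, and a tester approves or rejects the finding.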


When AI can reduce the workload for testers and help identify where to test, we’ll have to consider BFF status.


What happens when both testing applications and systems under test use AI?

Automation may know how to interact with the system, but it is missing “a procedure that distinguishes between the correct and incorrect behaviors of the SUT.”


In other words, how would an AI that tests know that the system under test is correct?
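One established partial answer to this oracle problem is metamorphic testing: even without knowing the correct output for any single input, we know relations that must hold between the outputs of related inputs. A minimal sketch, with a hypothetical search function as the SUT:

```python
# Metamorphic testing: we may not know the "correct" result set for
# any one query, but we know how results of related queries must
# relate. `search` is a hypothetical system under test.
def search(catalog: list[str], query: str) -> list[str]:
    return [item for item in catalog if query.lower() in item.lower()]

def test_narrower_query_never_adds_results():
    catalog = ["red shirt", "red shoes", "blue shirt"]
    broad = set(search(catalog, "red"))
    narrow = set(search(catalog, "red sh"))
    # The relation itself is the oracle: tightening the query
    # must never add results.
    assert narrow <= broad
```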


Humans do this by finding a source of truth—a product owner, a stakeholder, a customer. But what would the source of truth for the testing AI be?