A Blog from QualiTest

How will AI help us create better software? (Or: how can we leverage AI to create better user experiences?)

AI can help you focus your software efforts across the SDLC

AI is, in essence, a set of non-deterministic mathematical algorithms that can make sense of large sets of data. AI uses different learning techniques (e.g. deep learning) and adjustment techniques (such as compensating for concept drift). Instead of discussing those here, let’s cover some topics that demonstrate how AI will be leveraged for User Experience optimization.

 

AI-powered BI reporting

There is a healthy supply of capable BI tools and reporting dashboards out there. Setting those dashboards up with the right metrics is crucial for a reporting mechanism that enables governance and monitoring. AI will be built on top of all that, as AI feeds on data and can therefore only be as good as the input data.

AI models will be able to surface correlations, for example between individual productivity, throughput and quality, and to recommend mitigations for bottlenecks, quality issues and UX delivery challenges. In this way, non-human intelligence can be used to identify a variety of concerns and perhaps suggest fixes.
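To make that concrete, here is a minimal sketch of the idea (not any specific product’s feature): correlating sprint-level delivery metrics with escaped defects and flagging the strongest quality driver. The data, column names and use of pandas are illustrative assumptions only.

    # Minimal sketch: correlate hypothetical delivery metrics with quality outcomes.
    import pandas as pd

    metrics = pd.DataFrame({
        "stories_delivered": [12, 9, 15, 7, 11, 14],
        "avg_review_hours":  [1.5, 4.0, 1.0, 5.5, 2.0, 1.2],
        "escaped_defects":   [3, 7, 2, 9, 4, 2],
    })

    # Pairwise correlations hint at where bottlenecks and quality risks sit.
    print(metrics.corr())

    # Flag the strongest correlate of escaped defects as a candidate area to mitigate.
    drivers = metrics.corr()["escaped_defects"].drop("escaped_defects")
    print("Strongest correlate of escaped defects:", drivers.abs().idxmax())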

What’s in it for me? If I am a Program Manager, I can now extract meaning from past data and accumulated knowledge, and apply it to my current delivery challenges.

 

Same software, different tests

A system that leverages AI-augmented automated test case generation and smarter data handling bases its test coverage on coverage of actual user actions. While we will still functionally test all branches, the test generation will be ad hoc, preventing the System Under Test (SUT) from becoming resilient to a fixed set of test cases. This effectively moves the focus of Business Assurance (BA) from functional testing to business process assurance.
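As a rough illustration of what coverage based on user actions could look like, the sketch below samples test flows from observed usage frequencies, so each run produces a different scenario. The transition table, action names and probabilities are invented for this example and do not come from any particular tool.

    # Illustrative sketch: generate ad-hoc test flows weighted by observed user behaviour.
    import random

    # Hypothetical analytics data: how often each action follows another.
    transitions = {
        "login":       {"search": 0.7, "profile": 0.3},
        "search":      {"add_to_cart": 0.6, "search": 0.4},
        "profile":     {"logout": 1.0},
        "add_to_cart": {"checkout": 0.8, "search": 0.2},
    }

    def generate_flow(start="login", max_steps=5):
        """Walk the usage model to produce one user-like test scenario."""
        flow, step = [start], start
        for _ in range(max_steps):
            options = transitions.get(step)
            if not options:
                break
            step = random.choices(list(options), weights=list(options.values()))[0]
            flow.append(step)
        return flow

    # Each run yields a different, usage-weighted scenario to execute against the SUT.
    for _ in range(3):
        print(generate_flow())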

Tool vendors are now changing their positioning. They are looking at how to enable innovative ways of working, allowing business analysts, developers and test engineers to benefit from the useful capabilities of AI. Eggplant AI, for instance, helps teams intelligently navigate apps, heatmap probable locations of quality issues in the code, and correlate data for quick issue identification and resolution through analytics, machine learning and enhanced AI.

What’s in it for me? If I am a test manager, I can now focus my teams on making systems meaningful, not just functionally correct, shifting their focus from technical test tasks to business assurance.

 

Special forces over brute force

Unit testing is key to establishing and assuring software quality, but it is often treated as a time-consuming necessary evil. Testing tools that automatically generate unit tests are already here. Over the last year we have witnessed the emergence of AI tools able to produce similar coverage by using coverage-optimization algorithms and machine learning. Oxford’s Diffblue is one such effort, with the goal of automating all traditional coding tasks: bug fixing, test writing, refactoring and improving code, language translation, and creating new code from specs.
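The snippet below is a toy sketch of the coverage-guided idea behind such tools (it is not Diffblue’s algorithm): generate random inputs and keep only the ones that exercise a branch not yet covered, so the resulting suite stays small but broad. The function under test and its branch labels are invented for illustration.

    # Toy sketch of coverage-guided test generation: keep inputs that add new branch coverage.
    import random

    def price_with_discount(qty, vip):
        """Hypothetical function under test, with a few branches."""
        if qty <= 0:
            return 0
        price = qty * 10
        if vip:
            price *= 0.9
        if qty > 100:
            price *= 0.8
        return price

    def branches_hit(qty, vip):
        """Record which branches a given input exercises."""
        hits = {"qty<=0"} if qty <= 0 else {"qty>0"}
        if qty > 0:
            hits.add("vip" if vip else "not_vip")
            hits.add("bulk" if qty > 100 else "small")
        return hits

    covered, suite = set(), []
    for _ in range(500):
        case = (random.randint(-5, 200), random.choice([True, False]))
        new = branches_hit(*case) - covered
        if new:  # keep this input only if it adds coverage
            suite.append((case, price_with_discount(*case)))  # input plus observed result
            covered |= new

    print("generated cases:", suite)
    print("branches covered:", covered)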

What’s in it for me? If I am an R&D manager, I can now accelerate my team’s code delivery, with confidence that the code is correct and up to standard.

 

How can we tell if it does what it says on the tin?

AI implementations are already here, from shopping recommendations (shoppers who bought that also bought …) to semi- and fully-autonomous cars, voice assistants, image recognition and more. Given that AI is not deterministic and the result of a test might differ from one run to the next, how can we make sure that the AI delivers the right results?

Well, we no longer say right or wrong, true or false; we have instead adopted a System IQ model, or “how smart is the AI”. There are some interesting articles about how to test for System IQ, and the main shift is one of skills, methods and technologies. We will run “Experiments” rather than “Testing”, we will assign “Data Scientists in Test” rather than “S/W Engineers in Test”, and we will use statistics and trend monitoring rather than end-of-test-cycle reports.
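A small sketch of what “experiments over testing” can look like in practice: instead of asserting a single exact output, run the non-deterministic component many times and check that an aggregate metric stays above a statistical threshold. The stub recommender and the thresholds here are assumptions for illustration only.

    # Sketch: statistical acceptance check for a non-deterministic component.
    import random

    def recommendation_is_relevant():
        """Stand-in stub for a recommender call judged relevant ~88% of the time."""
        return random.random() < 0.88

    RUNS, MIN_ACCEPTABLE = 1000, 0.85
    relevant = sum(recommendation_is_relevant() for _ in range(RUNS))
    observed = relevant / RUNS

    print(f"observed relevance rate: {observed:.3f}")
    # Trend this rate across builds instead of reporting pass/fail per test case.
    assert observed >= MIN_ACCEPTABLE, "System IQ regression: relevance below threshold"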

What’s in it for me? If I am a CEO, I will know how to measure the added value of our AI implementation, both financially and technically.

 

Where to start? What to focus on?

A common way of working in development teams involves reliance on “ninjas”. Let’s face it, we love them: experts who know the system better than anyone else, and who we believe know exactly where to start testing and where the problems are likely to be. I say no more ninjas, or at least, no more ninja guesswork.

 

Failure prediction is an innovative micro-stream within the AI trend.

We see tools like Deepcoding.AI that focus on making test management more scientific. Utilizing the Socratic method, they enable decision-making based on questions and data, supporting prioritization of efforts and identification of needed changes to processes and assignments.
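To show the flavour of failure prediction, here is a hedged sketch (not Deepcoding.AI’s method): train a simple classifier on historical change metadata and use it to rank which parts of the current release deserve testing attention first. The features, data and use of scikit-learn are illustrative assumptions; real tools draw on much richer signals.

    # Sketch: rank release changes by predicted failure risk using change metadata.
    from sklearn.linear_model import LogisticRegression

    # Historical changes: [lines_churned, files_touched, author_recent_bugs]
    X = [[500, 12, 3], [40, 2, 0], [300, 8, 2], [20, 1, 0], [700, 15, 4], [60, 3, 1]]
    y = [1, 0, 1, 0, 1, 0]  # 1 = the change later caused a production defect

    model = LogisticRegression().fit(X, y)

    # Score changes in the current release; test the riskiest ones first.
    release = {"payments refactor": [450, 10, 2], "typo fix": [5, 1, 0]}
    for name, features in release.items():
        risk = model.predict_proba([features])[0][1]
        print(f"{name}: predicted failure risk {risk:.2f}")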

What’s in it for me? If I am a CTO, I will learn the quality of the latest release more quickly, enabling a go/no-go decision sooner, and will be able to focus my development managers on the probable failure points that the AI has identified.

 

What can we conclude from all of this?

AI’s influence in the software development arena continues to grow. New AI-based products keep coming out, and their abilities will only increase over time. Testers should realize that this latest arsenal of tools lets us do our jobs more easily, smartly and effectively, but it has not (at least not yet) replaced us.