The QA Financial conference, titled “AI and Automation for Mobile Banking and Ecommerce”, took place in London recently, and it was a great opportunity for mobile app development and quality assurance professionals to hear about the latest developments, challenges and solutions.
One thing that emerged from attending the event is that it was a great chance to understand just how much you don’t yet know, and this is especially true when it comes to AI.
Here are 6 significant takeaways from that conference.
Artificial intelligence, its role in quality assurance and where it can help your business was a major focus of the conference. One important insight discussed was that AI is not just about artificial intelligence, but also about human intelligence. Since an AI is only as intelligent as the data it has, and the data is only as good as the humans who provide it, human intelligence can be a limiting factor in developing and testing AI.
Some of the speakers talked about ways to test using AI. A handful of toolkits that leverage AI for test case design are coming out soon, but they still require a lot of manual input. AI tools for testing are definitely on their way, but the ones in use today are still at an early stage: they can be used on your projects, but they require real work to set up properly.
While we know that AI will play a major part in testing, the general notion is that we still don’t know how soon that will happen. The main question right now is whether the value AI solutions currently provide outweighs the effort required to make them work properly.
In addition, there’s a critical need for the right human intelligence in the AI design process, which calls for emotional intelligence and an understanding of the ethics and fairness around its development and delivery. The need for quality data for good AI is clear, but that data also needs to be interpreted morally and ethically for the AI to be truly successful.
A challenge for many people is figuring out what return on investment they are getting, not just from using AI, but also from testing and automation, and how to assess the quality of the delivery.
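To make that concrete, the testing-and-automation side of the question is often framed with the classic ROI formula: the value of the manual effort automation replaces, set against what it costs to build and maintain. Here is a minimal sketch, with every figure a hypothetical placeholder:

```python
# A minimal sketch of the classic test-automation ROI calculation.
# Every figure here is a hypothetical placeholder.
def automation_roi(hours_saved_per_run, runs_per_year, hourly_rate,
                   build_cost, yearly_maintenance):
    """ROI = (benefit - cost) / cost, where the benefit is the manual
    testing effort the automation replaces over a year."""
    benefit = hours_saved_per_run * runs_per_year * hourly_rate
    cost = build_cost + yearly_maintenance
    return (benefit - cost) / cost

# e.g. a suite saving 8 manual hours per run, executed 200 times a year
print(f"First-year ROI: {automation_roi(8, 200, 50, 30_000, 10_000):.0%}")
```

The hard part, as several speakers noted, is estimating the cost side honestly, and that is especially true for AI tooling.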
One speaker described trying an AI tool designed to identify parts of an application and work out where it can run tests. His concern was how much effort the AI needed to get running versus the time and cost savings it potentially offered. Currently, the AI needed more hand-holding than he would like: he had to indicate whether a given element could or could not be interacted with. In other words, it wasn’t as intelligent as he had hoped at picking up the moving parts of the process.
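For context, the kind of judgement the tool struggled with is one a test engineer scripts routinely. Here is a minimal sketch of such an interactability check using Selenium (the URL and locator are hypothetical placeholders):

```python
# A minimal sketch of the kind of interactability check the speaker's
# AI tool needed help with. The URL and locator are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page under test

def is_interactable(element):
    # The basic heuristic: an element is worth driving a test at only
    # if it is both rendered on screen and currently enabled.
    return element.is_displayed() and element.is_enabled()

for element in driver.find_elements(By.CSS_SELECTOR, "input, button, a"):
    if is_interactable(element):
        print("Testable:", element.tag_name, element.get_attribute("id"))

driver.quit()
```

Visibility and enablement are only the start; dynamic overlays, animation and timing are what make this hard to infer automatically, which is presumably where the hand-holding came in.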
Another concern for attendees was the need not only to deliver faster and smarter without compromising quality, but also to measure the quality of that delivery continuously, in real time.
This is something that the Qualitest dashboard and TestPredictor can help demonstrate. These solutions don’t replace testers, but they do inform them. If you can see how positive or negative your ongoing investment in testing and automation is, it becomes much easier to refine it and to understand how it contributes to the business. These solutions serve as a lens that enables an organisation to focus on making informed quality engineering and continuous improvement decisions.
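As an illustration only (not a description of how the Qualitest dashboard or TestPredictor work), one of the simplest real-time quality signals is a rolling pass rate over recent test runs:

```python
# A minimal sketch of a rolling pass-rate signal for a quality
# dashboard. An illustration, not any specific Qualitest product.
from collections import deque

class RollingPassRate:
    def __init__(self, window: int = 100):
        # Keep only the most recent `window` results: True = pass.
        self.results = deque(maxlen=window)

    def record(self, passed: bool) -> None:
        self.results.append(passed)

    @property
    def rate(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 0.0

tracker = RollingPassRate(window=50)
for outcome in [True, True, False, True]:  # hypothetical run outcomes
    tracker.record(outcome)
print(f"Rolling pass rate: {tracker.rate:.0%}")
```

Plotted over time, even a metric this simple shows whether your ongoing investment in testing is trending in the right direction.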
There’s still a lot of uncertainty, and no established best practice, when it comes to AI solutions and how they should be tested. There are several reasons for this, including the black-box nature of an AI solution, the sheer size of the input space and its non-deterministic behaviour.
Because AI is inherently different from the rule-based software that came before it, few people have tackled testing it to date, let alone figured out the best way to perform those tests.
Testing AI is a completely different kind of testing, since there’s no simple pass/fail result. Before you build your AI engine, a huge amount of data needs to be validated with AI-specific testing approaches to make sure there are no strange biases in it.
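As an illustration of what such pre-build validation can look like, here is a minimal sketch that flags a skewed outcome rate across groups in a tabular training set (the file name and the "gender" and "approved" columns are all hypothetical):

```python
# A minimal sketch of a pre-training bias check on tabular data.
# The file name and the "gender"/"approved" columns are hypothetical.
import pandas as pd

df = pd.read_csv("loan_applications.csv")  # hypothetical training data

# Outcome rate per group: a large gap suggests the data may teach
# the model a bias, and is worth investigating before any training.
rates = df.groupby("gender")["approved"].mean()
print(rates)

gap = rates.max() - rates.min()
if gap > 0.10:  # the 10% threshold is an assumption; tune per domain
    print(f"Warning: approval rates differ by {gap:.0%} across groups")
```

Real bias audits go much further (proxy variables, intersections, sampling artefacts), but even a check this simple catches problems before they are baked into the engine.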
Once the engine is built, all you can do is make sure it produces logical results. This is why there is a need for a data scientist in test: someone who understands the massive amounts of data the AI engine is processing and can verify that it processes them properly.
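Since there is no single pass/fail result, one common approach, sketched below under assumed names (`model.predict` is a hypothetical interface), is to replace exact assertions with statistical ones: run the engine over a held-out dataset and assert that its aggregate behaviour stays within an agreed bound:

```python
# A minimal sketch of a statistical acceptance test for a
# non-deterministic AI engine. `model.predict` and the threshold
# are hypothetical placeholders.
def test_model_accuracy(model, examples, labels, min_accuracy=0.95):
    # Instead of asserting every individual output, assert the
    # aggregate: the model must classify at least min_accuracy of
    # a held-out validation set correctly.
    correct = sum(1 for x, y in zip(examples, labels)
                  if model.predict(x) == y)
    accuracy = correct / len(examples)
    assert accuracy >= min_accuracy, (
        f"Accuracy {accuracy:.2%} is below the agreed {min_accuracy:.0%} bar"
    )
```

The threshold itself becomes the "logical result" the data scientist in test has to own: deciding what bound is acceptable is a modelling judgement, not just a testing one.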
Crowd testing is big now. It allows companies to carry out more flexible testing, using real users and devices in locations that would otherwise be complicated and costly to engage. It offers a bigger pool of testers, which makes for more authentic testing.
Crowd testing also allows you to choose the target audience and the devices to use under real-world conditions. This is one of Qualitest’s solutions, and people at the conference were looking into how they can tap into the pool of worldwide “organic” testers.
Optimal user experience is critical! It dictates not only how users interact with your app, but also whether they will keep using it or find an alternative with a competitor. A 2017 Qualitest mobile survey showed that 88% of app users will abandon an app because of bugs and glitches.
This can have a huge impact on your business, as users might choose, for example, to switch banks based solely on the app experience and usability.
There’s less room now for UI mistakes, glitches, bugs, slowness, or anything else that can go wrong with a mobile app (and we all know a lot can go wrong). A bad user experience can reflect badly on your company, even if the service or product is good.
According to the survey, 78% of users notice glitches and bugs in the apps they use, and 29% notice them one or more times per week. Users are becoming less forgiving: they expect consistency, reliability and quality, flawless apps and a seamless experience. If they don’t get it from you, they might get it from your competitor, so making sure the UX is top notch is getting a higher priority these days.
People at the conference had different approaches to automated QA, even within the same organisation, demonstrating the lack of an enterprise automation strategy.
Some people seemed to be in favour of letting developers do the automation and QA themselves, adding more (and needless) responsibilities to full-stack development.
Others recognised the need for intelligent, QA-driven automation produced by dedicated testing specialists. This is based on the understanding that developers do not inherently make good testers, not just because of their different mindset but also because they can be too invested in their own code. If you want to make sure your test automation is optimal, you need a dedicated, expert tester; there are no shortcuts here.