Agile, DevOps, Lean, Scrum: these are all software development frameworks we talk about these days. With a derisive sneer, we tend to refer...
New methodologies continue to be created to improve software production, promising greater benefits and fewer drawbacks. However, methodologies always fall down at the human element. As...
Vendors of Point of Sale (POS) software, system integrators, and retailers who develop or customize packaged solutions should make implementing test automation a priority in their testing organizations. Not only is automating a cash register possible, it turns out that POS is an excellent candidate for test automation, one where a high degree of coverage and a good return on investment can be achieved quickly.
The “Internet of Things” (IoT) is the network of associations between Internet-connected objects (smart devices) that are able to exchange information using an agreed method and data schema. Expertise requires knowledge of communication and other protocols, hardware trade-offs, software coding, Big Data impact, security, user experience, and the high demands of end users and regulators. These combine into a perfect storm, presenting established and new challenges regarding QA in general, and testing in the IoT ecosystem in particular. This article will highlight the challenges as well as address potential strategies and solutions.
As Scrum is the most popular framework adopted by organizations taking an Agile approach to project management, many companies are trying to find financial facts that justify its adoption. This article discusses evaluating the return on investment (ROI) of using Scrum, points out mistakes to avoid, and offers hints on how to get meaningful results from this activity.
This white paper will explore the components of SOA, then focus in particular on the services and service bus aspects of the architecture. These two areas are where QualiTest’s approach to SOA testing is differentiated, offering unique advantages to your organization.
Exploratory software testing is a powerful and fun approach to testing. In some situations, it can be orders of magnitude more productive than scripted testing. I haven't found a tester yet who didn't, at least unconsciously, perform exploratory testing at one time or another. Yet few of us study this approach, and it does not get much respect in our field. It's high time we stop the denial, and publicly recognize the exploratory approach for what it is: scientific thinking in real-time. Friends, that's a good thing.
What is agile software development, and what changes does it require of a tester? How does a tester become more effective in an agile environment? This white paper runs through the evolution of software development and compares agile techniques to Orange project management methodologies, as well as discussing whether the SMaRT methodology is relevant in an agile environment.
Stephen R. Covey’s 1989 book The Seven Habits of Highly Effective People has helped millions establish great habits for achieving true interdependent effectiveness in their lives and jobs. This article discusses these habits as they apply to highly effective scrum masters:
Begin with the End in Mind
Put First Things First
Seek First to Understand, Then to be Understood
Sharpen the Saw
According to ISEB, the test process “comprises planning, specification, execution, recording, checking for completion and test closure activities.” To ensure that this objective is met, rigorous testing of the product must take place, whether manually, through automation, or both, in order to eliminate any defects. In a perfect world (making the tester’s life far too easy), all software would be delivered to test with no defects whatsoever: all the tests could be carried out with no faults ever detected, and there would be no need to upset the responsible developer. However, this ideal world is unfortunately a fantasy; although on rare occasions testers may find themselves in a position where no defects are found, in the vast majority of cases this is simply not so. That is why it is so important to have a defect management process in place: that way, when defects are inevitably detected, the testers know exactly how to identify and manage them, streamlining the testing process and increasing its efficiency.
Even many experienced professionals can have trouble determining exactly how long the testing process is going to take from end to end. Because it can depend so heavily on the specifics of each individual project, test estimation must be performed for every system independently of others before testing can commence. The estimation process is a complex one which contributes to the length, cost, and quality of a finished project – so how is it determined?
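One common starting point (by no means the only estimation technique in practice) is three-point estimation, which combines optimistic, most-likely, and pessimistic figures into a single weighted number. The sketch below assumes a simple PERT-style weighting; the example figures are hypothetical.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Three-point (PERT-style) estimate: a weighted average that
    leans toward the most likely figure."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# A test cycle estimated at 2 days best case, 4 days most likely,
# and 12 days worst case comes out at 5.0 days.
print(pert_estimate(2, 4, 12))
```

In practice such a figure is only a baseline; it still has to be adjusted for system complexity, environment availability, and the depth of testing the business actually requires.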
System breakdowns are an integral part of thorough testing, but they are vastly underutilized in the IT industry, to the detriment of many development projects. Because of the comprehensive system map they provide, performing a system breakdown leads to a deeper understanding of the system in production, making both development and testing easier on everyone involved.
To test a system's capability to handle faults, recovery testing is performed. This testing approach forces the software to fail and verifies that recovery is performed properly. It is essential for any mission-critical system (for example, FDA Class III products or defense systems). Such systems are so important that, by their nature, they impose a strict protocol for how the system should behave in case of a failure. Other examples include large financial systems, banking, logistics and many more.
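As an illustration, a recovery test can be sketched as deliberately injecting a failure and then asserting that the system both refuses work while down and restores committed state afterwards. The service and its methods below are hypothetical, purely to show the shape of such a test.

```python
class FlakyService:
    """Hypothetical service that can be forced to fail and then recover."""

    def __init__(self):
        self.healthy = True
        self.journal = []  # records persisted before a crash

    def write(self, record):
        if not self.healthy:
            raise ConnectionError("service is down")
        self.journal.append(record)

    def crash(self):
        # Fault injection: simulate an outage.
        self.healthy = False

    def recover(self):
        # Recovery must restore service without losing journaled records.
        self.healthy = True
        return list(self.journal)


def test_recovery():
    svc = FlakyService()
    svc.write("txn-1")
    svc.crash()  # force the failure
    try:
        svc.write("txn-2")
        raise AssertionError("expected a failure while the service is down")
    except ConnectionError:
        pass  # correct behaviour under failure
    restored = svc.recover()  # verify the recovery path
    assert svc.healthy
    assert restored == ["txn-1"]  # no committed data lost
```

A real recovery test would inject faults at the infrastructure level (killed processes, dropped connections, power loss), but the assertions follow the same pattern: defined behaviour during the failure, and no loss of committed data after it.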
Results Based Testing (RBT) is a new software testing pricing model that sets forth the expected value to be delivered by the Testing Teams.
Imagine your testing provider making a commitment to find 97% of the defects in the system. Can that be measured? What would that be worth? What would be the impact on the customer-provider relationships and accountability? How can you bind that commitment? This white paper will attempt to answer those questions.
It seems well-accepted that it is cheaper to find defects earlier in the software development lifecycle than during dynamic testing or in live operation. I don’t need to include a graph of the cost escalation curve here; we’ve all seen it before. This rule can be applied to all walks of life – putting up a garden shed, building a house, buying a car, running a marathon – you name it and the adage “find problems early to save pain and aggravation later” always applies.
Some people say that requirements are about what you build, and design is about how you build it. This simplistic statement may sound right, but there are two potential problems with condensing the matter in this way. First, it makes it sound like there should be a sharp boundary between requirements and design, when that’s not really the case. In reality, the region between the two is actually very grey and foggy, not a crisp line at all. I prefer to say that requirements should concentrate on what and design should concentrate on how. It is important to explore possible designs that might satisfy certain requirements; a valuable method for assessing the clarity, correctness, completeness, and feasibility of these is through prototyping. Getting customer feedback on prototypes helps ensure you’re on the right track.
It can be a real challenge to reduce regression testing, because it is a vital testing procedure that seeks out software errors by retesting the entire software system. Its whole purpose is to ensure that no additional errors were introduced while fixing problems or introducing new features. Most companies spend around 40-60% of their test execution effort on regression testing. It is an important aspect of ensuring that the customer’s experience will stay the same as in the previous release, or even improve.
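In its simplest form, a regression run re-executes a fixed set of inputs and compares the results against outputs recorded from the previous release. The pricing function and "golden" values below are hypothetical, just to show the mechanism.

```python
def discounted_total(prices, rate):
    """Hypothetical function under test: total price after a discount."""
    return round(sum(prices) * (1 - rate), 2)


# Expected results recorded from the previous release. A regression run
# replays the same inputs and flags any result that has drifted.
GOLDEN = [
    (([10.0, 20.0], 0.1), 27.0),
    (([5.0], 0.0), 5.0),
    (([19.99, 0.01], 0.5), 10.0),
]


def run_regression():
    failures = []
    for args, expected in GOLDEN:
        actual = discounted_total(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures  # empty list means no regressions detected
```

Reducing regression effort is then largely about choosing which golden cases to keep: prioritizing the ones covering high-risk or recently changed behaviour rather than blindly replaying everything.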
This article provides examples of how we, as a test team, dealt with the problems and describes mistakes we made and lessons learned. Obviously, there are many more ways to deal with problems. Where relevant, we will share with you the different thoughts we had before making a decision.
Over the years, many organizations have invested heavily in creating or deploying project management frameworks. PRINCE2 is a widely-adopted project management methodology, developed by a UK government agency and used extensively within the UK as the management standard for its public projects. PRINCE2 originated when the sequential “waterfall” method for delivering software projects was the dominant paradigm. However, constant upheaval and increasing competition in the IT industry have led organizations to seek a more flexible management method; some have already turned to Agile practices in an effort to improve their response to business changes. So, do traditional project management techniques like PRINCE2 add value in an Agile environment? Is there any way we could combine these two methodologies to work together?
If you've been taught that the software testing process must include a complete set of detailed test documentation, or that an explicitly-defined set of expected results must accompany each step in a testing plan, you should read this white paper.
Though to many testers the difference between functional testing and usability testing seems obvious, it is actually something that can stump many professionals in our industry. Both are a vital part of the testing process; one would think that we would spend more time determining, discussing, and teaching the difference between them than we actually do.
When we discuss test analysis, we do so in reference to the test basis – that collection of documents, standards, and attitudes that we use to guide our testing. Unfortunately, it seems that for some testers the test basis begins and ends with the requirements as captured by a business analyst or a requirements analyst. Many testers seem reluctant to push back on the requirements, to look beyond what is written down. This is understandable, as many project managers are even more reluctant to extend project scope and many developers are wary of what they see as tester-driven development; we are not all used to having our opinions listened to and respected.
Test estimation is a forecast of the projected cost and duration of testing, agreed upon between the testers and the enterprise that requires testing. It is a means of discerning information which will need to be fed back into the business. It is often said that testing is a risk mitigation exercise, but as the testing process itself cannot mitigate risk, it would be truer to describe the testing function as a tool by which the business gleans enough information to mitigate risk. It’s likely that most readers of this white paper estimate in accordance with what should be done as a set of testing tasks. This is not incorrect; however, considering estimation only against what one believes to be the correct list of testing tasks and depth of testing is an incomplete way of approaching it.
Behavior Driven Development is an Agile software development technique focused on improving a key factor in the successful development of any software product: Communication. It is a technique devised by Dan North as a response to the issues he encountered whilst teaching Test Driven Development. Eric Evans describes it this way in his book Domain Driven Design: "A project faces serious problems when its language is fractured. Domain experts use their jargon while technical team members have their own language tuned for discussing the domain in terms of design... Across this linguistic divide, the domain experts vaguely describe what they want. Developers, struggling to understand a domain new to them, vaguely understand."
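The communication benefit shows up in how BDD scenarios are written: Given/When/Then steps expressed in the shared domain language, each backed by plain code. A minimal sketch (using a hypothetical banking domain, not any specific BDD framework) might look like this:

```python
class Account:
    """Hypothetical domain object named in the team's shared language."""

    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount


def scenario_deposit_increases_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the customer deposits 50
    account.deposit(50)
    # Then the balance is 150
    assert account.balance == 150
```

Dedicated BDD tools externalize those Given/When/Then lines into plain-text scenarios that domain experts can read and review, which is precisely the bridge across the "linguistic divide" Evans describes.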
In this article, I have attempted to bring together ideas on inspecting and analyzing requirements using reading techniques – procedural approaches that encompass common checklist methods – to achieve better defect detection rates as early as possible in the development lifecycle.
Many of us have experienced projects that drag on much longer than expected and cost more than planned. Companies looking to improve their software development processes are now exploring how Agile can help their enterprise deliver software more reliably, quickly, and iteratively, with a feature set that hits the mark. While Agile has different "flavors", Scrum is one process for implementing Agile. This article will discuss the Agile Scrum process and will end with variants of Scrum that can be used to aid in improving your software releases.
A mixture of “crowd” and “outsourcing,” crowdsourcing is defined by Merriam-Webster as “the practice of obtaining needed services, ideas, or content by soliciting contributions from a large group of people and especially from the online community rather than from traditional employees or suppliers.” In crowdsourced testing, this term applies to using the educated masses to contribute to your testing process. This approach has clear advantages, such as reaching a wider range of testers and a potentially higher ROI for the testing process. However, there are certainly disadvantages as well, such as difficulties with confidentiality and communication between all parties involved.
The best-known definition of exploratory testing (ET) was coined by James Bach and Cem Kaner:
“Exploratory testing is simultaneous learning, test design, and test execution.” This was the prevailing definition for many years. The newer, descriptive definition says:
“Exploratory testing is an approach to testing that emphasizes the freedom and responsibility of each tester to continually optimize the value of his work. This is done by treating learning, test design, and test execution as mutually supportive activities that run in parallel throughout the project.”