What is accessibility? – When we say that something is “accessible,” we mean that it is available for use by as many people as possible; it is the ability for anyone and everyone to gain access to a product, device, service, or environment.
What is accessibility testing? – To determine whether a software product can be used by people with disabilities, we perform accessibility testing. According to census.gov, 56.7 million Americans have some sort of disability; that is nearly one in every five. Do you really want to create a product that is unusable by almost 20% of the US population? Accessibility testing is a vital aspect of software development and testing.
As the IT industry evolves, it introduces new disciplines to meet new needs, and as those disciplines mature, their methods expand. Audit testing is one method the testing discipline can use to examine a testing process and produce usable feedback with less resource expenditure than a more exhaustive testing effort would require.
It is important to note that testing a data migration should start well in advance of the actual data being migrated. A business's data is typically one of the most important assets it controls, owns, or maintains, so any data migration should be considered high risk and subjected to significant verification and validation effort. A business may later choose to lower the assessed risk level, but it is always prudent to begin by treating the data migration as a high-priority task to ensure it succeeds.
Some applications are easy to test and automate; others are significantly less so. It is well known in the software development industry that the earlier a bug is found, the cheaper it is to fix. The question, then, is how to find bugs as quickly and efficiently as possible. A good answer is to design and write code in a way that is friendly to testing. The measure of that friendliness is usually called “testability”, and it can be summed up in four principles.
Efficient management of data used for testing is essential to maximizing return on investment and supporting the highest levels of testing success and coverage. If the test data is hard to use and adapt, poorly represents the sampled source, or consumes excessive resources to prepare and maintain, the quality of results quickly degrades and keeps degrading. To tip the balance toward positive results and improved returns, consider the process, potential challenges, and possible solutions involved in test data management (TDM).
Testing can sometimes become a troublesome and uncontrollable process. It can take more time and money than originally planned, and still offer insufficient insight into the quality of the test process, possibly putting the quality of the software under test, and the business process itself, at risk. So what can we do?
Normal functional testing ensures software works as the requirements specify, assuring our customers that their software will perform according to a given list of requirements or specifications. Security testing is a natural extension of negative testing: it focuses on unacceptable inputs and whether those inputs are likely to cause significant failures with respect to the requirements of the product under test.
QualiTest has led multiple enterprise-level regression testing efforts, as well as test automation efforts integrated into enterprise patch management, at midsize and large global organizations. This experience has produced the following comprehensive methodology and set of best practices.
“Have you run vulnerability scans? Have you factored penetration tests into your test strategy? Have you made sure that security standards are met? Is the final product compliant with the procedures? Do we need penetration tests?” These are questions I hear every day at the client site. I worked on a couple of projects last year, and both required penetration testing, port scans, and vulnerability scans. Initially I was intimidated by these buzzwords, but I was curious to understand why and how they are important, as they were all requirements in the BRS. By refining the testing process at my client site and engaging performance testers early in the project, I was able to make decisions on non-functional testing scope and get the overall test strategy in shape.
Cloud computing refers to the use and access of multiple server-based computational resources via a digital network. In cloud computing, applications are provided and managed by the cloud server, and data is likewise stored remotely in the cloud configuration. Users do not download and install applications on their own device or computer; all processing and storage are maintained by the cloud server.
How about disaster recovery with cloud computing? The two seem a perfect match. With all of the servers virtualized, instead of just backing up the data, we can now back up the entire server off-site. It is easy to take a snapshot of the server every night, send it off-site, and then spin that entire server up fairly quickly. This approach offers many possibilities and opportunities, but it also carries inherent issues and risks.
No matter the project, and no matter how much work you’ve done in the test environment, additional testing will always be necessary in the live environment before and after the product’s launch date. Can we avoid this? No, we probably can’t. That testing leaves test data behind, and there is a very real need to remove it. So what are the options for getting rid of this data? Connecting to a test database that is a copy of the live one, even if only for a short period of time, is not really doable. How about running scripts at regular intervals to delete the test data?
Test data, if left untouched and unhandled, may interfere with the client’s business intelligence. The interference can be quite significant on a more dynamic site that undergoes frequent maintenance and upgrades and, as a result, accumulates more test data. So what should we do?
The aim of testing is to establish whether requirements have been met for specific products or services. The independent test team should be audacious enough to question everything about the requirements in order to develop a common understanding and knowledge base.