A Blog from QualiTest

How to Test Big Data Systems

Big Data is often perceived simply as a huge amount of data and information, but it is much more than that: it is an entire set of approaches, tools, and methods for processing large volumes of structured as well as unstructured data. The three parameters by which Big Data is defined, i.e. Volume, Variety, and Velocity, describe how you have to process an enormous amount of data in different formats at different rates.

Traditional analysis techniques have limitations when dealing with data of this size, owing to its complexity. Testing Big Data can therefore be quite a challenge for organizations that have little knowledge of what to test and how to test it. Before moving on to how testing is performed on Big Data systems, let's take a look at the basic aspects of Big Data processing on which the testing procedure is based.

 

Aspects of Big Data testing:

1. Validation of structured and unstructured data
2. Optimal test environment
3. Dealing with non-relational databases
4. Performing non-functional testing

Failure in any of the above areas can result in poor data quality, delays in testing, and increased testing cost.

Big Data testing can be performed in two ways, i.e. functional and non-functional testing. Strong test data and test environment management are required to ensure error-free processing of data.


A. FUNCTIONAL TESTING

Functional Testing is performed in three stages namely,

1. Pre-Hadoop Process Testing
2. MapReduce Process Validation
3. Extract-Transform-Load Process Validation and Report Testing

 

 

1. Pre-Hadoop Process Testing: HDFS (Hadoop Distributed File System) lets you store huge amounts of data across a cluster of machines. When data is extracted from various sources such as web logs, social media, RDBMS, etc., and uploaded into HDFS, an initial stage of testing is carried out as described below.

● Verification that the data acquired from the original source is not corrupted
● Validation that data files were uploaded into the correct HDFS location
● Checking of file partitioning and the copying of files to different data nodes
● Determination of the complete set of data to be checked
● Verification that the source data is in sync with the data uploaded into HDFS
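The ingestion checks above can be illustrated with a minimal, self-contained sketch. It compares record counts and an order-independent checksum between the source data and the data as read back from HDFS; in a real pipeline the two record lists would come from the source system and from an HDFS read, and all names here are illustrative.

```python
import hashlib

def record_checksum(records):
    """Order-independent checksum over a collection of text records."""
    h = hashlib.sha256()
    for rec in sorted(records):
        h.update(rec.encode("utf-8"))
    return h.hexdigest()

def validate_ingestion(source_records, hdfs_records):
    """Compare the source data with the data read back from HDFS."""
    report = {
        "count_match": len(source_records) == len(hdfs_records),
        "checksum_match": record_checksum(source_records) == record_checksum(hdfs_records),
        "missing": sorted(set(source_records) - set(hdfs_records)),
    }
    report["ok"] = report["count_match"] and report["checksum_match"]
    return report

source = ["user1,click", "user2,view", "user3,click"]
ingested = ["user2,view", "user1,click"]  # one record lost in transit
print(validate_ingestion(source, ingested)["missing"])  # → ['user3,click']
```

Sorting before hashing makes the checksum independent of the order in which records arrive, which matters because parallel uploads rarely preserve source order.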


2. MapReduce Process Validation: MapReduce is a data processing model used to condense massive amounts of data into practical, aggregated, compact data packets.

● Testing of business logic, first on a single node and then on a set of multiple nodes
● Validation of the MapReduce process to ensure the correct generation of the "key-value" pairs
● Validation of the aggregation and consolidation of data after the "reduce" operation
● Comparison of the generated output data with the input files to make sure the output meets all requirements
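The key-value validation described above can be sketched without a Hadoop cluster: a single-process word count that runs the map, shuffle, and reduce phases in turn and then checks the aggregated output against the input. This mirrors the recommended practice of testing the business logic on a single node first; the functions and data are illustrative.

```python
from collections import defaultdict

def map_phase(lines):
    """Emit (key, value) pairs — here, (word, 1) for a word count."""
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    """Group values by key, as the MapReduce framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Aggregate each key's values — here, summing the counts."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big tests", "big data"]
result = reduce_phase(shuffle(map_phase(lines)))
print(result)  # → {'big': 3, 'data': 2, 'tests': 1}

# Validation: aggregated totals must equal the number of input words
assert sum(result.values()) == sum(len(l.split()) for l in lines)
```

The final assertion is the kind of input-versus-output reconciliation the last bullet calls for: no words may be gained or lost between the map input and the reduced output.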


3. ETL Process Validation and Report Testing: ETL stands for Extract, Transform, Load. This is the last stage of testing in the queue, where the data generated by the previous stage is first unloaded and then loaded into the downstream repository, i.e. the Enterprise Data Warehouse (EDW), where reports are generated or a transactional system analysis is done for further processing.

● Checking the correct application of transformation rules
● Inspecting data aggregation to ensure the data is loaded into the target system without distortion
● Ensuring there is no data corruption by comparing the target data with the HDFS file system data
● Validating that reports include the required data and that all indicators are displayed correctly
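A common way to check transformation rules is to re-apply them independently to the source data and compare the result with what was actually loaded into the warehouse. The sketch below does this for two illustrative rules (normalizing a country code and rounding an amount); the rules, field names, and sample rows are assumptions for the example, not a prescribed schema.

```python
def transform(record):
    """Illustrative transformation rules: normalize country code, cast amount."""
    return {"country": record["country"].strip().upper(),
            "amount": round(float(record["amount"]), 2)}

def validate_etl(source_rows, warehouse_rows):
    """Re-apply the rules to the source and compare with the loaded data."""
    expected = [transform(r) for r in source_rows]
    return {
        "row_count": len(expected) == len(warehouse_rows),
        "rules_applied": expected == warehouse_rows,
        "totals_match": (sum(r["amount"] for r in expected)
                         == sum(r["amount"] for r in warehouse_rows)),
    }

source = [{"country": " us", "amount": "10.501"},
          {"country": "de ", "amount": "5.0"}]
loaded = [{"country": "US", "amount": 10.5},
          {"country": "DE", "amount": 5.0}]
print(validate_etl(source, loaded))  # → {'row_count': True, 'rules_applied': True, 'totals_match': True}
```

The `totals_match` check is a simple aggregate reconciliation: even when row-level comparison is too expensive at scale, comparing sums or counts between source and target can catch distortion cheaply.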

 

B. NON-FUNCTIONAL TESTING

Since Hadoop processes large chunks of data of varying variety at varying speeds, it becomes imperative to perform architectural testing of Big Data systems to ensure the success of your project. This non-functional testing is performed in two ways, i.e. Performance Testing and Failover Testing.

 

1. Performance Testing
Performance testing covers job completion time, memory utilization, and the data throughput of the Big Data system. Its main objective is not restricted to acknowledging the application's performance, but extends to improving the performance of the Big Data system as a whole. It is performed as follows:

● Obtain the performance metrics of the Big Data system, i.e. response time, maximum data processing capacity, speed of data consumption, etc.
● Determine the conditions that cause performance problems, i.e. assess performance-limiting conditions
● Verify the speed at which MapReduce processing (sorts, merges) is executed
● Verify the storage of data at different nodes
● Test JVM parameters such as heap size, GC collection algorithms, etc.
● Test the values for connection timeout, query timeout, etc.
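Collecting the first metric above — throughput — can be as simple as timing a job over a known number of records. This is a minimal sketch in which `sample_job` merely stands in for a real processing stage (e.g. a MapReduce sort); a real performance test would run against the cluster and repeat the measurement under increasing load.

```python
import time

def measure_throughput(process, records):
    """Time a processing function and report simple performance metrics."""
    start = time.perf_counter()
    process(records)
    elapsed = time.perf_counter() - start
    return {"records": len(records),
            "seconds": elapsed,
            "records_per_sec": len(records) / elapsed if elapsed > 0 else float("inf")}

def sample_job(records):
    # Stand-in for the real job under test (e.g. a sort/merge stage)
    return sorted(records)

metrics = measure_throughput(sample_job, list(range(100_000)))
print(f"{metrics['records_per_sec']:.0f} records/sec")
```

Repeating such a measurement with growing input sizes is one practical way to find the performance-limiting conditions mentioned above: throughput that degrades sharply at some input size points to a bottleneck worth investigating.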

2. Failover Testing: Failover testing is done to verify the seamless processing of data when data nodes fail. It validates the recovery process and the processing of data once it is switched to other data nodes. Two metrics are observed during this testing, i.e.

● Recovery Time Objective
● Recovery Point Objective

 

CONCLUSION

Many big firms, including cloud enablers and various project-management-tool platforms, are using Big Data, and the main challenge such organizations face today is how to test Big Data systems and how to improve their performance and processing power. The testing procedures described above ensure that everything is working well and that the extracted and processed data is undistorted and in sync with the original data. Big Data processing can be batch, real-time, or interactive, so when dealing with such huge amounts of data, Big Data testing becomes imperative as well as inevitable.

Author Bio:
Swati Panwar is a Technical Content Writer. Writing is her passion and she believes one day she will change the world with her words. She is a technical writer by day and an insatiable reader at night. Her love for technology and the latest digital trends can be seen in her write-ups. Besides this, she is also fond of poetry. She’s extremely empathic towards animals and when not writing, she can be found cuddling with her cat.

 

