Abstract
Data processing platforms rely on distributed systems to process and store big data efficiently. These platforms expose hundreds of configurable parameters, which are currently tuned based on intuition and experience. Selecting the relevant parameters and finding their optimal values from an exponentially large space of combinations is a tedious process. The proposed work addresses this issue for three Apache big data platforms, namely Hadoop, Spark and Storm. The most significant parameters, shortlisted using various feature selection approaches, are tuned. Applications are executed iteratively to tune these parameters, identify their optimal values, and examine the individual impact of each resulting parameter. The empirical results show a significant reduction in job execution time for Hadoop and Spark and an increase in the number of tuples emitted for Storm, demonstrating the optimised performance of the data platforms.
Subject Classification: