Hadoop MapReduce has proven to be an efficient model for distributed data processing. The model is widely used by different service providers, which creates the challenge of maintaining the same efficiency and performance level across different systems. One of the most critical problems for this model is how to overcome heterogeneity and scalability issues in different systems. Performance degradation in heterogeneous environments occurs due to inefficient scheduling of Map and Reduce tasks. Another important problem is how to minimize the master-node overhead and the network traffic created by the scheduling algorithm. In this paper, we introduce a lightweight adaptive scheduler in which we provide the classifier with information about job requirements and node capabilities. The scheduler classifies jobs into executable and non-executable according to node capabilities. Then the scheduler assigns the tasks to appropriate nodes in the cluster to achieve the highest performance. © Springer Science+Business Media Singapore 2017.
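The classify-then-assign idea described above can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the `Job` and `Node` structures, their fields, and the placement heuristic (smallest node that still fits) are assumptions introduced for illustration.

```python
# Illustrative sketch of a capability-aware scheduler: classify jobs as
# executable or non-executable against node capabilities, then assign each
# executable job to a node that satisfies its requirements.
# All names and fields here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_cores: int
    mem_gb: int

@dataclass
class Job:
    name: str
    cpu_cores: int   # required cores
    mem_gb: int      # required memory

def classify_and_assign(jobs, nodes):
    """Split jobs into executable / non-executable, and map each
    executable job to the smallest node that meets its requirements."""
    assignments, non_executable = {}, []
    for job in jobs:
        candidates = [n for n in nodes
                      if n.cpu_cores >= job.cpu_cores and n.mem_gb >= job.mem_gb]
        if not candidates:
            # No node in the cluster can run this job.
            non_executable.append(job.name)
            continue
        # Heuristic: pick the smallest sufficient node, keeping larger
        # nodes free for more demanding jobs.
        best = min(candidates, key=lambda n: (n.cpu_cores, n.mem_gb))
        assignments[job.name] = best.name
    return assignments, non_executable

nodes = [Node("n1", 4, 8), Node("n2", 16, 64)]
jobs = [Job("j1", 2, 4), Job("j2", 8, 32), Job("j3", 32, 128)]
assignments, skipped = classify_and_assign(jobs, nodes)
# assignments → {"j1": "n1", "j2": "n2"}; skipped → ["j3"]
```

Because the classification step happens once per job against static node profiles, a scheduler of this shape keeps master-node overhead low, which matches the lightweight goal stated in the abstract.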