While Hadoop 1.0 (the current distribution) is driving the world at increasing speed, Hadoop 2.0 has already made its debut with a bigger promise: overcoming some of Hadoop 1.0's limitations, such as scalability, cluster utilisation, agility and data processing without MapReduce.
Hadoop 1.0 does what it promises brilliantly. MapReduce is the backbone of Hadoop 1.0. It is very good for batch processing but of little help for real-time and near-real-time processing. Moreover, for a job to run on the cluster, it has to be written as, or converted into, a MapReduce job. MapReduce is great for certain kinds of work but does not fit all. In terms of resource management, MapReduce and Hadoop 1.0 do not guarantee full or effective cluster utilisation.
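To see why this is limiting, consider the shape every MapReduce job must take: a map phase emitting key-value pairs, a shuffle that groups values by key, and a reduce phase that aggregates each group. The toy word count below is a conceptual sketch in plain Python, not Hadoop's actual Java API; it only illustrates the pattern that every problem must be forced into, and why the whole dataset is processed in a single batch pass rather than incrementally.

```python
from collections import defaultdict

def map_phase(documents):
    # Mapper: emit a (word, 1) pair for every word in every input record
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle/sort: group all emitted values by key before reduction
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: aggregate the grouped values for each key
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["Hadoop runs batch jobs", "batch jobs run on Hadoop"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["batch"])   # 2
print(counts["hadoop"])  # 2
```

Note that the reducer cannot start until the map and shuffle phases have finished over the entire input, which is exactly why this model suits batch workloads and struggles with real-time processing.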