When we talk about the Hadoop ecosystem, the first name that comes to mind is MapReduce, since it is the processing model on which the whole Hadoop framework relies. Data processing in Hadoop is carried out with the MapReduce paradigm: a MapReduce program, typically written in Java, consists of two functions, a map function and a reduce function, which together make up the processing job.
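To make the two-phase idea concrete, here is a minimal word-count sketch in plain Java. Note that this is not the actual Hadoop MapReduce API (which involves Mapper/Reducer classes and a cluster); it is only a self-contained illustration of how a map step emits intermediate records and a reduce step aggregates them per key. The class and method names are made up for this example.

```java
import java.util.*;
import java.util.stream.*;

// Conceptual word count: the "map" phase emits individual words,
// the "reduce" phase sums the occurrences per word. Plain Java,
// not the Hadoop API -- it only illustrates the two-phase idea.
public class WordCountSketch {
    public static Map<String, Long> wordCount(List<String> lines) {
        // Map phase: break each input line into words (the keys)
        Stream<String> words = lines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")));
        // Shuffle + reduce phase: group identical words, sum their counts
        return words.collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts =
                wordCount(Arrays.asList("hadoop map reduce", "map reduce map"));
        System.out.println(counts.get("map"));    // 3
        System.out.println(counts.get("reduce")); // 2
    }
}
```

In real Hadoop the same split happens across many machines: map tasks run on the nodes holding the data, and the framework shuffles each word to the reduce task responsible for it.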
At this point many newcomers wonder whether Java is the only way to process data in Hadoop, and whether non-coders can do any data processing at all. If you are thinking the same, you are mistaken. The Hadoop ecosystem has plenty of other components, for example Pig and Hive. Pig processes data with Pig Latin scripts, which are operator based and quite easy to learn. People who have an affinity for query languages can instead use Hive Query Language (HQL) for data processing, since HQL is very close to SQL. In other words, Hive and Pig are major data processing components of the ecosystem, so the widespread belief that data cannot be processed in Hadoop without Java is a myth.
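As a feel for how operator-based Pig Latin reads, here is a tiny hypothetical script; the file path and field names are made up for illustration, not taken from any real dataset:

```pig
-- Hypothetical: total bytes served per user from a comma-separated log
logs    = LOAD '/data/access.log' USING PigStorage(',')
              AS (user:chararray, bytes:int);
by_user = GROUP logs BY user;
totals  = FOREACH by_user GENERATE group AS user, SUM(logs.bytes) AS total_bytes;
DUMP totals;
```

The same aggregation in HQL would look almost exactly like SQL, something along the lines of `SELECT user, SUM(bytes) FROM logs GROUP BY user;`, which is why people comfortable with query languages pick Hive so quickly.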
To import data from relational data stores such as MySQL and Oracle into Hadoop, we have Sqoop, which transfers bulk data from these stores into Hadoop so that Hadoop's processing capabilities can be applied to it for faster processing. Another question that comes to mind is: is this really needed? The answer is an emphatic yes. These stores hold huge amounts of structured data because we have been working with these technologies for a long time, and if we are now moving to Hadoop for its processing capabilities, we cannot simply throw away the useful data accumulated over so many years. Sqoop can likewise be used in the other direction, to export data from Hadoop back to these data stores.
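A typical Sqoop invocation looks like the sketch below. This is illustration only, not something to run as-is: the JDBC connection string, database, table names, and HDFS paths are all made-up placeholders, and a real run needs a Hadoop cluster and database credentials.

```shell
# Hypothetical import: pull the "orders" table from MySQL into HDFS
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username report \
  --table orders \
  --target-dir /user/hadoop/orders

# Export works the same way in reverse: push results back to MySQL
sqoop export \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username report \
  --table orders_summary \
  --export-dir /user/hadoop/orders_summary
```

Under the hood Sqoop turns such a command into map-only MapReduce jobs, so the transfer itself is parallelized across the cluster.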
On the other hand, a big demand in industry today is collecting online data, and this is the job of Flume, another major component of the Hadoop ecosystem. Flume is needed because much of today's data arrives in streaming form from online sources, for example remote web servers, and Flume lets us collect this streaming data into Hadoop efficiently.
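Flume is driven by a properties-style configuration that wires a source, a channel, and a sink together. Below is a minimal hypothetical agent config; the agent name, log path, and HDFS directory are placeholders chosen for this sketch:

```properties
# Hypothetical agent: tail a web server log and land events in HDFS
agent1.sources  = src1
agent1.channels = ch1
agent1.sinks    = sink1

# Source: stream new lines from the access log as they are written
agent1.sources.src1.type = exec
agent1.sources.src1.command = tail -F /var/log/httpd/access_log
agent1.sources.src1.channels = ch1

# Channel: buffer events in memory between source and sink
agent1.channels.ch1.type = memory

# Sink: write the collected events into HDFS
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = /flume/weblogs
agent1.sinks.sink1.channel = ch1
```

The source-channel-sink split is the design point: the channel decouples collection speed from write speed, so a slow HDFS write does not stall the web server tail.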
A few more important components of the ecosystem are Oozie, Zookeeper, and others, but it really does not matter whether you are excellent with these components when trying to get an entry into the Hadoop field. Hands-on Zookeeper experience matters more for someone seeking a job as a Hadoop admin. That said, you should have a brief idea of how they work, so that you do not miss what these important contributors do in a Hadoop cluster.