Tag Archives: online hadoop training

What’s Next for Apache Hadoop Data Management and Governance

Hadoop – the data processing engine based on MapReduce – is being superseded by new processing engines: Apache Tez, Apache Storm, Apache Spark, and others. YARN makes any data processing future possible. But Hadoop the platform – thanks to YARN as its architectural center – is the future for data management, with a selection of […]

The Importance of Apache Drill to the Big Data Ecosystem

You might be wondering what bearing a history lesson may have on a technology project such as Apache Drill. In order to truly appreciate Apache Drill, it is important to understand the history of the projects in this space, as well as the design principles and the goals of its implementation. The lessons that have been […]

How SQOOP-1272 Can Help You Move Big Data from Mainframe to Apache Hadoop

Apache Sqoop provides a framework to move data between HDFS and relational databases in a parallel fashion using Hadoop’s MR framework. As Hadoop becomes more popular in enterprises, there is a growing need to move data from non-relational sources like mainframe datasets to Hadoop. Following are possible reasons for this: HDFS is used simply as an […]
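The kind of transfer described above is driven from the Sqoop command line. The sketch below is hypothetical – the hostnames, database, table, and dataset names are placeholders, not from the post – but it shows the classic RDBMS import alongside the mainframe import tool that SQOOP-1272 introduced:

```shell
# Classic parallel import from an RDBMS into HDFS
# (connection details and table name are placeholders):
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --table orders \
  --target-dir /data/orders \
  -m 4                      # run 4 parallel map tasks

# Mainframe import added by SQOOP-1272: pull the members of a
# partitioned dataset (PDS) from z/OS into HDFS:
sqoop import-mainframe \
  --connect mainframe.example.com \
  --dataset PROD.ORDERS \
  --target-dir /data/mainframe/orders
```

Both commands fan the transfer out as Hadoop map tasks, so throughput scales with the `-m` setting rather than being limited to a single connection.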

Kudu: New Apache Hadoop Storage for Fast Analytics on Fast Data

The set of data storage and processing technologies that define the Apache Hadoop ecosystem are expansive and ever-improving, covering a very diverse set of customer use cases used in mission-critical enterprise applications. At Cloudera, we’re constantly pushing the boundaries of what’s possible with Hadoop—making it faster, easier to work with, and more secure. Cloudera, the […]

Introduction to HDFS Erasure Coding in Apache Hadoop

Hadoop is a popular open-source implementation of the MapReduce framework, designed to analyze large data sets. It has two parts: the Hadoop Distributed File System (HDFS) and MapReduce. HDFS is the file system Hadoop uses to store its data. It has become popular due to its reliability, scalability, and low-cost storage capability. HDFS by default replicates […]

Drill into Your Big Data Today with Apache Drill

Big data techniques are becoming mainstream in an increasing number of businesses, but how do people get self-service, interactive access to their big data? And how do they do this without having to train their SQL-literate employees to be advanced developers? One solution is to take advantage of the rapidly maturing open source, open community […]
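Drill's route to that self-service access is letting analysts run ANSI-style SQL directly against raw files, with no schema definition or ETL step first. A hypothetical sketch (the file path, fields, and filter are placeholders):

```sql
-- Query a raw JSON file in place through Drill's dfs storage plugin;
-- Drill discovers the record structure as it reads.
SELECT t.name, t.visits
FROM dfs.`/data/logs/users.json` AS t
WHERE t.visits > 100
LIMIT 10;
```

The same query shape works whether the underlying data is JSON, Parquet, or CSV, which is what makes existing SQL skills enough to get started.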

How-to: Deploy Apache Hadoop Clusters Like a Boss

The HDFS docs have some information, and logically it makes sense to separate the network of the Hadoop nodes from a “management” network. However, in our experience, multi-homed networks can be tricky to configure and support. The pain stems from Hadoop integrating with a large ecosystem of components that all have their own network and […]

Hadoop advantages and disadvantages

Advantages of Hadoop: 1. Scalable Hadoop is a highly scalable storage platform, because it can store and distribute very large data sets across hundreds of inexpensive servers that operate in parallel. Unlike traditional relational database systems (RDBMS), which can't scale to process large amounts of data, Hadoop enables businesses to run applications on thousands of nodes […]

How to install Hadoop?

Prerequisites Supported Platforms GNU/Linux is supported as a development and production platform. Hadoop has been demonstrated on GNU/Linux clusters with 2000 nodes. Win32 is supported as a development platform. Distributed operation has not been well tested on Win32, so it is not supported as a production platform. Required Software Required software for Linux and Windows includes: Java™ 1.6.x, […]
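For a single-node setup, the prerequisite checks above usually come down to a few commands. This is a hedged sketch following the standard Hadoop setup steps; the JVM path is an example and will differ on your system:

```shell
# Verify the required software is present:
java -version        # needs Java 1.6.x or later
ssh localhost        # sshd must be running; Hadoop scripts use ssh

# If ssh prompts for a password, set up passphraseless keys:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

# Point Hadoop at the JVM, typically in conf/hadoop-env.sh
# (the path below is a placeholder - adjust for your install):
export JAVA_HOME=/usr/lib/jvm/java-6-sun
```

Once these checks pass, the Hadoop daemons can be started in single-node (pseudo-distributed) mode before attempting a real cluster.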

Hadoop Admin responsibilities

Hadoop Admin Responsibilities: Responsible for implementation and ongoing administration of Hadoop infrastructure. Aligning with the system engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments. Working with data delivery teams to set up new Hadoop users. This job includes setting up Linux users, setting up Kerberos […]

Comparison of Hadoop with SQL and Oracle database

Basically, the difference is that Hadoop is not a database at all. Hadoop is essentially a distributed file system (HDFS) – it lets you store large amounts of file data across a cluster of machines, handling data redundancy and so on. Comparing SQL databases and Hadoop: Hadoop is a framework for processing data. What makes it better […]
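The "framework for processing data" point is easiest to see in the MapReduce programming model itself. A toy word count in the MapReduce style, sketched in plain single-process Python (Hadoop would run the map and reduce phases in parallel across a cluster; this only illustrates the model, and the functions and sample data are made up for the example):

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in every input line."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    """Reduce: sum the counts emitted for each distinct word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["hadoop stores data", "hadoop processes data"]
print(reduce_phase(map_phase(lines)))
```

Where a SQL database would answer this with `GROUP BY` over indexed tables, Hadoop applies user-written map and reduce functions to raw files, which is why the two are complements rather than direct substitutes.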

Five Must Read Books on Hadoop

Looking for Hadoop books? We have shortlisted the best Hadoop books. 1. Hadoop: The Definitive Guide (by Tom White) This is the best book for Hadoop beginners, and a great source to introduce you to the world of big data management. 2. Hadoop in Practice (by Alex Holmes) This book discusses the advanced […]

What are the pre-requisites for big data hadoop?

Working directly with the Java APIs can be tedious and error-prone, and it restricts the use of Hadoop to Java programmers. Hadoop offers two solutions for making Hadoop programming easier. Pig is a programming language that simplifies the common tasks of working with Hadoop: loading data, expressing transformations on the data, and storing the final results. […]
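The load/transform/store pattern described above maps onto just a few lines of Pig Latin. A hypothetical sketch (the file paths, relation names, and fields are placeholders, not from the post):

```pig
-- Load raw data, aggregate it, and store the result:
logs    = LOAD '/data/weblogs' AS (user:chararray, bytes:int);
by_user = GROUP logs BY user;
totals  = FOREACH by_user GENERATE group AS user,
                                   SUM(logs.bytes) AS total_bytes;
STORE totals INTO '/data/bytes_per_user';
```

Pig compiles a script like this into MapReduce jobs behind the scenes, so no Java is required.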

Who can become a hadoop professional?

System administrators can learn some Java skills as well as cloud services management skills to start working with Hadoop installation and operations. DBAs and ETL data architects can learn Apache Pig and related technologies to develop, operate, and optimize the massive data flows going into the Hadoop system. BI analysts and data analysts can learn SQL and Hive […]