Training

Learn about big data from our certified experts to gain a competitive advantage

Our classes are briefings on big data tools and technologies, designed for management and business people. Attendees will learn about big data concepts and familiarize themselves with current technology trends and opportunities. These popular classes provide guidance on applying the right technology and business criteria when adopting big data in your organization, equipping you to present a new and innovative vision to your peers.

They will enable you to:

  • Understand big data and how it can be applied to store, manage, process and analyse massive amounts of unstructured and poly-structured data.

  • Explore the latest technologies underpinning big data, including Hadoop and NoSQL.

  • Determine how big data systems can complement traditional data warehousing and business intelligence solutions and processes.

  • Utilize big data to differentiate your business and provide better service to your customers.

  • Examine real-world case studies of how big data is influencing society and businesses.

Big Data Concepts and Hadoop Essentials Course

Overview

This session is designed to help attendees understand the concepts and benefits of big data and Apache Hadoop and how this technology can help them meet their business goals. Topics cover the Apache Hadoop technology stack, including HDFS, MapReduce, YARN, Hive, HBase and Spark.

Duration

1 day

Who is the course for?

Anyone who is looking at adopting a big data solution, anyone involved in data-driven business change, and everyone who needs an overview of the Apache Hadoop technology ecosystem.

Prerequisites

None

Course Outline

  • Concepts surrounding big data and analytics
  • Real world examples of how data is impacting business
  • Technology challenges at big data scale
  • How Apache Hadoop works and supports big data, analytics and business transformation
  • Common Apache Hadoop tools: HDFS, YARN, Sqoop, Pig, Hive, Spark, HBase
  • Introduction to common Hadoop distributions such as Cloudera, Hortonworks, MapR

HDP Developer: Apache Pig and Hive

Overview

This course is designed for developers who need to create applications to analyse big data stored in Apache Hadoop using Pig and Hive. Topics include Hadoop, YARN, HDFS, MapReduce, data ingestion, workflow definition, using Pig and Hive to perform data analytics on big data, and an introduction to Spark Core and Spark SQL.

Duration

4 days

Who is the course for?

Software developers who need to understand and develop applications for Hadoop.

Prerequisites

Attendees should be familiar with programming principles and have experience in software development. SQL knowledge is also helpful. No prior Hadoop knowledge is required.

What you will learn

  • Describe Hadoop, YARN and use cases for Hadoop
  • Describe Hadoop ecosystem tools and frameworks
  • Describe the HDFS architecture
  • Use the Hadoop client to input data into HDFS
  • Transfer data between Hadoop and a relational database
  • Explain YARN and MapReduce architectures
  • Run a MapReduce job on YARN
  • Use Pig to explore and transform data in HDFS
  • Use Hive to explore and analyse data sets
  • Understand how Hive tables are defined and implemented
  • Use Hive windowing functions
  • Explain and use the various Hive file formats
  • Create and populate a Hive table that uses the ORC file format
  • Use Hive to run SQL-like queries to perform data analysis (see the query sketch after this list)
  • Use Hive to join datasets using a variety of techniques
  • Write efficient Hive queries
  • Perform data analytics using the DataFu Pig library
  • Explain the uses and purpose of HCatalog
  • Use HCatalog with Pig and Hive
  • Define and schedule an Oozie workflow
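
To give a flavour of the SQL-style analysis covered, here is a minimal illustrative sketch. The labs themselves use Hive and HiveQL; this sketch uses Spark SQL (also introduced in the course) to run a very similar query, and the file paths and column names are hypothetical.

```python
# Illustrative sketch only: the course labs use Hive/HiveQL, but Spark SQL
# accepts queries in the same style. Paths and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-analytics-sketch").getOrCreate()

# Load two hypothetical datasets from HDFS and expose them as SQL views.
orders = spark.read.csv("hdfs:///data/orders.csv", header=True, inferSchema=True)
customers = spark.read.csv("hdfs:///data/customers.csv", header=True, inferSchema=True)
orders.createOrReplaceTempView("orders")
customers.createOrReplaceTempView("customers")

# A join plus a windowing function: the style of query taught in the course.
result = spark.sql("""
    SELECT c.region,
           o.order_id,
           o.amount,
           RANK() OVER (PARTITION BY c.region ORDER BY o.amount DESC) AS amount_rank
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
""")
result.show(10)

spark.stop()
```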

Course Outline

Hands-on Labs

  • Use HDFS commands to add/remove files and folders
  • Use Sqoop to transfer data between HDFS and an RDBMS
  • Run MapReduce and YARN application jobs
  • Explore, transform, split and join datasets using Pig
  • Use Pig to transform and export a dataset for use with Hive
  • Use HCatLoader and HCatStorer
  • Use Hive to discover useful information in a dataset
  • Describe how Hive queries get executed as MapReduce jobs
  • Perform a join of two datasets with Hive
  • Use advanced Hive features: windowing, views, ORC files
  • Use Hive analytics functions
  • Analyse clickstream data and compute quantiles with DataFu (see the sketch after this list)
  • Define an Oozie workflow
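
As a rough analogue of the clickstream and quantile lab, the sketch below computes approximate quartiles in PySpark. The lab itself uses the DataFu Pig library; the HDFS path and column names here are hypothetical and for illustration only.

```python
# Rough analogue of the DataFu quantile lab, sketched in PySpark rather than Pig.
# The HDFS path and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("clickstream-quantiles-sketch").getOrCreate()

# Hypothetical clickstream data: one row per page view with a duration column.
clicks = spark.read.csv("hdfs:///data/clickstream.csv", header=True, inferSchema=True)

# Approximate 25th, 50th and 75th percentiles of time spent per page view.
quartiles = clicks.approxQuantile("duration_seconds", [0.25, 0.5, 0.75], 0.01)
print("Approximate quartiles:", quartiles)

spark.stop()
```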

Format

  • 50% Lecture/Discussion
  • 50% Hands-on Labs

Apache Hadoop Essentials

Overview

This course provides a technical overview of Apache Hadoop. It includes high-level information about concepts, architecture, operation, and uses of the Hortonworks Data Platform (HDP) and the Hadoop ecosystem. The course provides an optional primer for those who plan to attend a hands-on, instructor-led course.

Duration

1 day

Who is the course for?

Data architects, data integration architects, managers, C-level executives, decision makers, technical infrastructure team, and Hadoop administrators or developers who want to understand the fundamentals of big data and the Hadoop ecosystem.

Prerequisites

No previous Hadoop or programming knowledge is required. Students will need browser access to the Internet.

Course Objectives

  • Describe the use case for Hadoop
  • Identify Hadoop ecosystem architectural categories:
      • Data Management
      • Data Access
      • Data Governance and Integration
      • Security
      • Operations
  • Detail the HDFS architecture
  • Describe data ingestion options and frameworks for batch and real-time streaming
  • Explain the fundamentals of parallel processing (see the sketch after this list)
  • See popular data transformation and processing engines in action:
      • Apache Hive
      • Apache Pig
      • Spark
  • Detail the architecture and features of YARN
  • Describe how to secure Hadoop
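
The parallel-processing objective above is easiest to picture with a small example. The sketch below is a classic word count written against Spark's RDD API (Spark is one of the engines demonstrated in the course); it is illustrative only, and the HDFS input path is hypothetical.

```python
# Minimal sketch of parallel processing in the MapReduce style, using Spark's
# RDD API. The input path is hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()
sc = spark.sparkContext

counts = (
    sc.textFile("hdfs:///data/sample.txt")     # split the input across workers
      .flatMap(lambda line: line.split())      # "map" step: emit individual words
      .map(lambda word: (word, 1))             # key each word with a count of 1
      .reduceByKey(lambda a, b: a + b)         # "reduce" step: sum counts per word in parallel
)
print(counts.take(10))

spark.stop()
```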

Course Outline

  • Operational overview with Ambari
  • Loading data into HDFS
  • Data manipulation with Hive
  • Risk analysis with Pig
  • Risk analysis with Spark and Zeppelin
  • Securing Hive with Ranger