Developer Training for Apache Spark and Hadoop
Learn how to import and process data with key Hadoop ecosystem tools
This four-day hands-on training course delivers the key concepts and expertise participants need to ingest and process data on a Hadoop cluster using the most up-to-date tools and techniques. Employing Hadoop ecosystem projects such as Spark (including Spark Streaming and Spark SQL), Flume, Kafka, and Sqoop, this training course is the best preparation for the real-world challenges faced by Hadoop developers. With Spark, developers can write sophisticated parallel applications that enable faster and better decisions and interactive analysis, applied to a wide variety of use cases, architectures, and industries.
Prerequisites
This course is designed for developers and engineers who have programming experience, but prior knowledge of Hadoop is not required.
- Apache Spark examples and hands-on exercises are presented in Scala and Python; the ability to program in one of these languages is required
- Basic familiarity with the Linux command line is assumed
- Basic knowledge of SQL is helpful
Key Features
- LIVE Instructor-led Classes
- 24x7 on-demand technical support for assignments, queries, quizzes, projects, and more
- Flexibility to attend classes at a time convenient for you
- Server access to Massive's Tech Management System until you start your dream career
- A huge database of Interview Questions
- Professional Resume Preparation
- Earn a Skill Certificate
- Enroll today and get the advantage.
Curriculum
- Apache Hadoop Overview
- Data Storage and Ingest
- Data Processing
- Data Analysis and Exploration
- Other Ecosystem Tools
- Introduction to the Hands-On Exercises
- Problems with Traditional Large-Scale Systems
- HDFS Architecture
- Using HDFS
- Apache Hadoop File Formats
- YARN Architecture
- Working With YARN
- Apache Sqoop Overview
- Importing Data
- Importing File Options
- Exporting Data
- What is Apache Spark?
- Using the Spark Shell
- RDDs (Resilient Distributed Datasets)
- Functional Programming in Spark
- Creating RDDs
- Other General RDD Operations
- Key-Value Pair RDDs (see the word-count sketch after the curriculum)
- Map-Reduce
- Other Pair RDD Operations
- Spark Applications vs. Spark Shell
- Creating the SparkContext
- Building a Spark Application (Scala and Java; see the application sketch after the curriculum)
- Running a Spark Application
- The Spark Application Web UI
- Configuring Spark Properties
- Logging
- Review: Apache Spark on a Cluster
- RDD Partitions
- Partitioning of File-Based RDDs
- HDFS and Data Locality
- Executing Parallel Operations
- Stages and Tasks
- RDD Lineage
- RDD Persistence Overview (see the caching sketch after the curriculum)
- Distributed Persistence
- Common Apache Spark Use Cases
- Iterative Algorithms in Apache Spark
- Machine Learning
- Example: k-means
- Apache Spark SQL and the SQL Context
- Creating DataFrames
- Transforming and Querying DataFrames (see the DataFrame sketch after the curriculum)
- Saving DataFrames
- DataFrames and RDDs
- Comparing Apache Spark SQL, Impala, and Hive-on-Spark
- Apache Spark SQL in Spark 2.x
- What is Apache Kafka?
- Apache Kafka Overview
- Scaling Apache Kafka
- Apache Kafka Cluster Architecture
- Apache Kafka Command Line Tools
- What is Apache Flume?
- Basic Flume Architecture
- Flume Sources
- Flume Sinks
- Flume Channels
- Flume Configuration
- Flume and Kafka Integration Overview
- Flume and Kafka Integration Use Cases
- Flume and Kafka Integration Configuration
- Apache Spark Streaming Overview
- Example: Streaming Request Count
- DStreams (see the streaming sketch after the curriculum)
- Developing Streaming Applications
- Apache Spark Streaming: Processing Multiple Batches
- Multi-Batch Operations
- Time Slicing
- State Operations
- Sliding Window Operations
- Streaming Data Source Overview
- Apache Flume and Apache Kafka Data Sources
- Example: Using a Kafka Direct Data Source
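The short sketches below illustrate a few of the topics listed above. They are minimal illustrations rather than course materials: file paths, class names, hosts, and ports are placeholders. First, a word-count sketch of the kind typically entered in the Spark shell, using functional transformations and a key-value pair RDD (the input path weblogs.txt is hypothetical; sc is the SparkContext the shell provides):

```scala
// Entered in the Spark shell, where the SparkContext is already available as `sc`.
// "weblogs.txt" is a hypothetical input path used only for illustration.
val lines = sc.textFile("weblogs.txt")

// Split lines into words, pair each word with 1, then sum the counts per word:
// the classic map-reduce pattern expressed with pair-RDD operations.
val counts = lines.flatMap(line => line.split("\\s+"))
                  .map(word => (word, 1))
                  .reduceByKey(_ + _)

// Transformations are lazy; this action triggers execution of the lineage.
counts.take(10).foreach(println)
```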
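Where the shell provides a SparkContext, a standalone application creates its own and is then packaged and launched separately. A minimal sketch, assuming a hypothetical WordCount object with input and output paths passed as arguments:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal standalone Spark application; the object name and paths are illustrative only.
object WordCount {
  def main(args: Array[String]): Unit = {
    // Unlike the shell, an application creates its SparkContext explicitly.
    val conf = new SparkConf().setAppName("WordCount")
    val sc   = new SparkContext(conf)

    val counts = sc.textFile(args(0))
      .flatMap(_.split("\\s+"))
      .map((_, 1))
      .reduceByKey(_ + _)

    counts.saveAsTextFile(args(1))
    sc.stop()
  }
}
```

Once packaged (for example with sbt or Maven), an application like this is normally launched with spark-submit, where the master URL and other Spark properties can be supplied on the command line or in a configuration file.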
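The partitioning and persistence topics can be hinted at with one small caching sketch: persisting an RDD that more than one action reuses, so its lineage is not recomputed (the input path is again a placeholder):

```scala
// Assumes the shell-provided SparkContext `sc`; "events.txt" is a hypothetical path.
val errors = sc.textFile("events.txt")
               .filter(line => line.contains("ERROR"))

// Persist the filtered RDD in executor memory (default storage level MEMORY_ONLY)
// so the two actions below do not each re-read and re-filter the input.
errors.cache()

println(s"error lines: ${errors.count()}")
println(s"partitions:  ${errors.getNumPartitions}")
```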
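For the DataFrame and Spark SQL topics, a sketch against the Spark 2.x SparkSession API; the JSON file and its columns (name, age) are assumptions made purely for illustration:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("PeopleDF").getOrCreate()

// "people.json" and its columns (name, age) are hypothetical example data.
val people = spark.read.json("people.json")

// Equivalent DataFrame and SQL formulations of the same query.
val adults = people.select("name", "age").where("age > 21")
adults.show()

people.createOrReplaceTempView("people")
spark.sql("SELECT name, age FROM people WHERE age > 21").show()

// Save the result, for example as Parquet.
adults.write.mode("overwrite").parquet("adults.parquet")
```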
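Finally, a streaming sketch using the DStream API: a sliding-window word count over a socket source. The host, port, batch interval, and window sizes are placeholders; in the course, Flume and Kafka typically play the role of the data source.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Two local threads: one to receive data, one to process it.
val conf = new SparkConf().setMaster("local[2]").setAppName("StreamingWordCount")
val ssc  = new StreamingContext(conf, Seconds(2))   // 2-second batch interval
ssc.checkpoint("checkpoint")                        // commonly set for windowed/stateful operations

// Hypothetical socket source on localhost:9999.
val lines = ssc.socketTextStream("localhost", 9999)

// 30-second window of word counts, recomputed every 10 seconds.
val windowedCounts = lines.flatMap(_.split("\\s+"))
                          .map((_, 1))
                          .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))

windowedCounts.print()

ssc.start()
ssc.awaitTermination()
```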
Have Any Questions?
We are happy to answer any questions, and we appreciate all feedback about our work!