Master Apache Spark using Spark SQL and PySpark 3

Master Apache Spark using Spark SQL as well as PySpark with Python 3, with complimentary lab access

What you'll learn:

Apache Spark 2 and 3 using Python 3 (Formerly CCA 175)

  • All the HDFS commands needed to manage and validate files and folders in HDFS.
  • A quick recap of Python to help you get started with Spark.
  • The ability to solve problems using Spark SQL with SQL-style syntax.
  • The ability to solve problems using the PySpark Data Frame APIs.
  • How to register Data Frames as Temporary Views with the Spark Metastore so that data in Data Frames can be processed using Spark SQL.
  • How to build an Apache Spark application.
  • The life cycle of Apache Spark applications and the Spark UI.
  • How to set up an SSH proxy to access Spark application logs.
  • Deployment modes for Spark applications (cluster and client).
  • How to pass application properties files and external dependencies when running Spark applications.

Requirements:

  • Basic programming skills in any programming language.
  • A suitable environment: either a self-supported lab (instructions provided) or an ITVersity lab at an additional cost.
  • A 64-bit operating system; the amount of RAM required depends on the environment.
  • 4 GB of RAM if you have access to a proper cluster, or 16 GB of RAM if you use virtual machines such as the Cloudera QuickStart VM.

Description:

As part of this course, you will learn all the key skills needed to build Data Engineering pipelines using Spark SQL and Spark Data Frame APIs, with Python as the programming language. This course was formerly a CCA 175 Spark and Hadoop Developer course for certification exam preparation. As of 10/31/2021, that exam has been sunset, and we have renamed the course to Apache Spark 2 and Apache Spark 3 using Python 3, as it covers industry-relevant topics beyond the scope of the certification.

About Data Engineering

Data Engineering is nothing but processing data according to our downstream needs. As part of Data Engineering, we need to build different pipelines, such as batch pipelines and streaming pipelines. All roles related to data processing are consolidated under Data Engineering; conventionally, they were known as ETL Development, Data Warehouse Development, etc. Apache Spark has evolved into a leading technology for Data Engineering at scale.

I have prepared this course for anyone who would like to transition into a Data Engineer role using PySpark (Python + Spark). I am a Data Engineering Solution Architect myself, with proven experience in designing solutions using Apache Spark.

Let us go through the details of what you will be learning in this course. Keep in mind that the course is built around a lot of hands-on tasks, which will give you enough practice using the right tools, along with plenty of exercises to evaluate yourself. We will provide details about resources and environments to learn Spark SQL and PySpark 3 using Python 3, as well as reference material on GitHub for practice. You can use the cluster at your workplace, set up the environment using the provided instructions, or use the ITVersity lab to take this course.

Setup of Single Node Big Data Cluster

Many of you would like to transition to Big Data from conventional technologies such as Mainframes, Oracle PL/SQL, etc., and you might not have access to Big Data clusters. It is very important for you to set up the environment in the right manner. Don't worry if you do not have a cluster handy; we will guide you through support via Udemy Q&A.

  • Setup Ubuntu-based AWS Cloud9 Instance with the right configuration

  • Ensure Docker is set up

  • Setup Jupyter Lab and other key components

  • Setup and Validate Hadoop, Hive, YARN, and Spark

Are you feeling a bit overwhelmed about setting up the environment? Don't worry! We will provide complimentary lab access for up to 2 months. Here are the details.

  • Training using an interactive environment. You will get 2 weeks of lab access to begin with. If you like the environment and acknowledge it by providing a 5* rating and feedback, the lab access will be extended by an additional 6 weeks (2 months in total). Feel free to send an email to support@itversity.com to get complimentary lab access. Also, if your employer provides a multi-node environment, we will help you set up the material for practice as part of a live session. On top of Q&A support, we also provide the required support via live sessions.

A quick recap of Python

This course requires a decent knowledge of Python. To make sure you understand Spark from a Data Engineering perspective, we have added a module to quickly warm up with Python. If you are not familiar with Python, we suggest you go through our other course, Data Engineering Essentials - Python, SQL, and Spark.
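
To give a sense of the level of Python assumed, here is a small illustrative sketch (the data is made up, not taken from the course) showing the kind of constructs that appear constantly in PySpark code: list comprehensions, dictionaries, and lambda functions.

```python
# Illustrative Python warm-up (hypothetical data, not from the course material).
orders = [
    {"order_id": 1, "order_status": "COMPLETE"},
    {"order_id": 2, "order_status": "PENDING"},
    {"order_id": 3, "order_status": "COMPLETE"},
]

# List comprehension with a condition, similar to filter/select logic in Spark.
completed_ids = [o["order_id"] for o in orders if o["order_status"] == "COMPLETE"]

# Aggregation with a dict, similar to groupBy followed by count in Spark.
status_counts = {}
for o in orders:
    status_counts[o["order_status"]] = status_counts.get(o["order_status"], 0) + 1

# Lambda function with sorted, similar to orderBy in Spark.
by_count = sorted(status_counts.items(), key=lambda kv: kv[1], reverse=True)

print(completed_ids)  # [1, 3]
print(by_count)       # [('COMPLETE', 2), ('PENDING', 1)]
```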

Master required Hadoop Skills to build Data Engineering Applications

As part of this section, you will primarily focus on the HDFS commands needed to copy files into HDFS. The data copied into HDFS will be used to build data engineering pipelines using Spark and Hadoop, with Python as the programming language. A short illustrative sketch of these commands follows the list below.

  • Overview of HDFS Commands

  • Copy files into HDFS using the put or copyFromLocal commands

  • Validate whether the files were copied properly into HDFS using HDFS commands

  • Get the size of the files using HDFS commands such as du, df, etc.

  • Some fundamental concepts related to HDFS such as block size, replication factor, etc.
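
As a rough illustration of these commands, here is a minimal sketch that runs them from a Python session or Jupyter cell; the local and HDFS paths are hypothetical placeholders, and it assumes an HDFS client is configured on the machine.

```python
import subprocess

# Hypothetical paths for illustration; replace with your own local and HDFS locations.
local_dir = "/data/retail_db"
hdfs_dir = "/user/itversity/retail_db"

# Copy local files into HDFS (copyFromLocal behaves the same way as put).
subprocess.run(["hdfs", "dfs", "-put", "-f", local_dir, hdfs_dir], check=True)

# Review whether the files were copied properly.
subprocess.run(["hdfs", "dfs", "-ls", "-R", hdfs_dir], check=True)

# Get the size of the copied data and the overall HDFS capacity.
subprocess.run(["hdfs", "dfs", "-du", "-s", "-h", hdfs_dir], check=True)
subprocess.run(["hdfs", "dfs", "-df", "-h"], check=True)
```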

Data Engineering using Spark SQL

Let us deep-dive into Spark SQL to understand how it can be used to build Data Engineering pipelines. Spark SQL gives us the ability to leverage the distributed computing capabilities of Spark coupled with easy-to-use, developer-friendly SQL-style syntax. A short illustrative sketch follows the topic list below.

  • Getting Started with Spark SQL

  • Basic Transformations using Spark SQL

  • Managing Tables - Basic DDL and DML in Spark SQL

  • Managing Tables - DML and Create Partitioned Tables using Spark SQL

  • Overview of Spark SQL Functions to manipulate strings, dates, null values, etc

  • Windowing Functions using Spark SQL for ranking, advanced aggregations, etc.
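
To give a feel for the SQL-style syntax covered in this section, here is a minimal sketch; the orders table, its columns, and the configuration are hypothetical and only meant to illustrate the kind of queries involved.

```python
from pyspark.sql import SparkSession

# Minimal sketch, assuming a Spark installation with Hive support;
# in a cluster environment the session is typically preconfigured.
spark = SparkSession.builder.appName("Spark SQL Demo").enableHiveSupport().getOrCreate()

# Basic DDL: create a managed table (hypothetical schema).
spark.sql("""
    CREATE TABLE IF NOT EXISTS orders (
        order_id INT,
        order_date STRING,
        order_customer_id INT,
        order_status STRING
    ) USING PARQUET
""")

# Basic transformation: filtering, aggregation, and sorting.
spark.sql("""
    SELECT order_status, count(*) AS order_count
    FROM orders
    WHERE order_status IS NOT NULL
    GROUP BY order_status
    ORDER BY order_count DESC
""").show()

# Windowing function: rank statuses within each day.
spark.sql("""
    SELECT order_date, order_status, order_count,
           rank() OVER (PARTITION BY order_date ORDER BY order_count DESC) AS rnk
    FROM (
        SELECT order_date, order_status, count(*) AS order_count
        FROM orders
        GROUP BY order_date, order_status
    ) AS daily_counts
""").show()
```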

Data Engineering using Spark Data Frame APIs

Spark Data Frame APIs are an alternative way of building Data Engineering applications at scale, leveraging the distributed computing capabilities of Apache Spark. Data Engineers from application development backgrounds might prefer Data Frame APIs over Spark SQL to build Data Engineering applications. A short illustrative sketch follows the topic list below.

  • Data Processing Overview using Spark or Pyspark Data Frame APIs.

  • Projecting or Selecting data from Spark Data Frames, renaming columns, providing aliases, dropping columns from Data Frames, etc using Pyspark Data Frame APIs.

  • Processing Column Data using Spark or Pyspark Data Frame APIs - You will be learning functions to manipulate strings, dates, null values, etc.

  • Basic Transformations on Spark Data Frames using Pyspark Data Frame APIs such as Filtering, Aggregations, and Sorting using functions such as filter/where, groupBy with agg, sort or orderBy, etc.

  • Joining Data Sets on Spark Data Frames using Pyspark Data Frame APIs such as join. You will learn inner joins, outer joins, etc using the right examples.

  • Windowing Functions on Spark Data Frames using Pyspark Data Frame APIs to perform advanced Aggregations, Ranking, and Analytic Functions

  • Spark Metastore Databases and Tables and integration between Spark SQL and Data Frame APIs
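
For comparison with the Spark SQL approach, here is a minimal sketch using the PySpark Data Frame APIs; the input path, schema, and values are hypothetical and only meant to show the shape of the code.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql.functions import col, count, rank

spark = SparkSession.builder.appName("Data Frame API Demo").getOrCreate()

# Hypothetical input path and schema, for illustration only.
orders = spark.read.csv(
    "/user/itversity/retail_db/orders",
    schema="order_id INT, order_date STRING, order_customer_id INT, order_status STRING"
)

# Projection, filtering, aggregation, and sorting.
daily_status_counts = (
    orders
    .filter(col("order_status").isin("COMPLETE", "CLOSED"))
    .groupBy("order_date", "order_status")
    .agg(count("*").alias("order_count"))
    .orderBy(col("order_count").desc())
)

# Windowing function: rank statuses within each day.
spec = Window.partitionBy("order_date").orderBy(col("order_count").desc())
ranked = daily_status_counts.withColumn("rnk", rank().over(spec))
ranked.show()

# Integration between Data Frame APIs and Spark SQL via a temporary view.
ranked.createOrReplaceTempView("daily_status_counts")
spark.sql("SELECT * FROM daily_status_counts WHERE rnk = 1").show()
```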

Apache Spark Application Development and Deployment Life Cycle

Once you go through the content related to Spark using a Jupyter-based environment, we will also walk you through the details of how Spark applications are typically developed using Python, deployed, and reviewed. A short illustrative sketch follows the list below.

  • Setup Python Virtual Environment and Project for Spark Application Development using Pycharm

  • Understand complete Spark Application Development Lifecycle using Pycharm and Python

  • Build a zip file for the Spark application, copy it to the environment where it is supposed to run, and run it.

  • Understand how to review the Spark Application Execution Life Cycle.
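
To illustrate what a deployable application can look like, here is a minimal sketch of an entry-point script along with the kind of spark-submit command used to run it; all file names, paths, and arguments are hypothetical.

```python
# app.py -- minimal sketch of a Spark application entry point.
import sys

from pyspark.sql import SparkSession


def main(src_dir, tgt_dir):
    spark = SparkSession.builder.appName("Daily Order Counts").getOrCreate()
    orders = spark.read.json(src_dir)
    # Placeholder transformation; a real pipeline would do much more here.
    daily_counts = orders.groupBy("order_date").count()
    daily_counts.write.mode("overwrite").parquet(tgt_dir)
    spark.stop()


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])

# A typical submission from the edge node might look like this (cluster deploy mode):
#   spark-submit --master yarn --deploy-mode cluster \
#     --py-files app_dependencies.zip \
#     app.py /public/orders /user/itversity/daily_order_counts
```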

All the demos are given on our state-of-the-art Big Data cluster. You can get one month of complimentary lab access by reaching out to support@itversity.com with your Udemy receipt.

Who this course is for:

  • Anyone who would like to transition into a Data Engineer role using PySpark (Python + Spark)
  • Professionals moving to Big Data from conventional technologies such as Mainframes or Oracle PL/SQL

Course Details:

  • 30-Day Money-Back Guarantee
  • Full Lifetime Access

Demo Link: https://www.udemy.com/course/spark-sql-and-pyspark3/