PySpark: EDA, Feature Selection, Feature Engineering, Pipelines, Classification (baseline model and hyperparameter tuning with CrossValidator)

PySpark

For an interactive preview (with all the charts displayed):

  • You can edit and run the notebook by importing it into your Databricks account. After clicking Workspace, select Import from any folder's menu and paste the URL that shows up → here

  • If you don't have a Databricks account, set one up and create a workspace → .

    • If you first want a detailed explanation of how it works → click here

  • PySpark is the ‘product’ of the collaboration between Apache Spark and Python.
  • PySpark is the Python API for Apache Spark, an open-source distributed computing framework that provides some of the most popular tools used to carry out common Big Data tasks.
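
As a quick illustration of what working with the API looks like, here is a minimal sketch that starts a SparkSession and loads the Titanic training data into a DataFrame. The file path is a placeholder (adjust it to wherever the Kaggle CSV lives in your workspace), and in a Databricks notebook a ready-made `spark` session already exists:

```python
from pyspark.sql import SparkSession

# In Databricks a SparkSession named `spark` is created for you;
# elsewhere, build one explicitly.
spark = SparkSession.builder.appName("titanic").getOrCreate()

# Placeholder path: point this at the Kaggle Titanic train.csv in your workspace.
df = (spark.read
      .option("header", True)        # first row holds the column names
      .option("inferSchema", True)   # let Spark infer the column types
      .csv("/FileStore/tables/train.csv"))

df.printSchema()   # quick look at the inferred schema (a starting point for EDA)
df.show(5)         # preview the first few rows
```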

Aim: To create an ML model with PySpark that predicts which passengers survived the sinking of the Titanic.

Considering that the Titanic ML competition is almost legendary and that almost everyone (competitor or not) who has tried to tackle the challenge did so with either Python or R, I decided to use PySpark instead, running a notebook in Databricks to show how easy it can be to work with PySpark, namely regarding the following steps (a short code sketch follows the list):

  • EDA
  • Feature Selection
  • Feature Engineering
  • Train-Test Split (within the training set)
  • Pipelines
  • Classification and Evaluation: a baseline model and hyperparameter tuning with CrossValidator
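
To make that flow concrete, below is a minimal sketch of those steps, assuming the standard Kaggle Titanic columns (Survived, Pclass, Sex, Age, Fare) and the DataFrame `df` loaded above; the notebook's actual feature set, model and parameter grid may differ:

```python
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, Imputer, VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import BinaryClassificationEvaluator

# Train-test split within the training set
train_df, test_df = df.randomSplit([0.8, 0.2], seed=42)

# Feature engineering stages
sex_indexer = StringIndexer(inputCol="Sex", outputCol="SexIdx")    # encode the categorical column
age_imputer = Imputer(inputCols=["Age"], outputCols=["AgeImp"])    # fill missing ages (mean by default)
assembler = VectorAssembler(inputCols=["Pclass", "SexIdx", "AgeImp", "Fare"],
                            outputCol="features")

# Baseline classifier
lr = LogisticRegression(featuresCol="features", labelCol="Survived")

# All steps bundled into a single Pipeline
pipeline = Pipeline(stages=[sex_indexer, age_imputer, assembler, lr])

# Hyperparameter tuning with CrossValidator
param_grid = (ParamGridBuilder()
              .addGrid(lr.regParam, [0.0, 0.01, 0.1])
              .addGrid(lr.elasticNetParam, [0.0, 0.5])
              .build())
evaluator = BinaryClassificationEvaluator(labelCol="Survived")   # area under ROC by default

cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=param_grid,
                    evaluator=evaluator,
                    numFolds=5)

cv_model = cv.fit(train_df)                      # picks the best parameter combination
predictions = cv_model.transform(test_df)        # evaluate on the held-out split
print("AUC on held-out split:", evaluator.evaluate(predictions))
```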
