Practice in data modeling and building ETL pipelines with Python, loading the transformed data into two kinds of DBMS: PostgreSQL (relational) and Apache Cassandra (NoSQL). This work was part of Udacity's Data Engineering Nanodegree.
A startup called Sparkify wants to analyze the data they've been collecting on songs and user activity on their new music streaming app. The analytics team is particularly interested in understanding what songs users are listening to. Currently, they don't have an easy way to query their data, which resides in a directory of JSON logs on user activity on the app, as well as a directory with JSON metadata on the songs in their app.
They'd like a data engineer to create a Postgres database with tables designed to optimize queries on song play analysis, and have brought you onto the project. Your role is to create a database schema and ETL pipeline for this analysis. You'll be able to test your database and ETL pipeline by running queries given to you by Sparkify's analytics team and comparing your results with their expected results.
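As a rough illustration of the Postgres side, a minimal ETL sketch might look like the one below. The connection settings, file path, and the exact `songs` table definition are placeholder assumptions, not necessarily the project's actual names; see each project's directory for the real pipeline.

```python
import glob
import json

import psycopg2  # PostgreSQL driver for Python

# Placeholder connection settings; the real project defines its own.
conn = psycopg2.connect(
    host="127.0.0.1", dbname="sparkifydb", user="student", password="student"
)
cur = conn.cursor()

# Extract: walk the directory of JSON song metadata files.
for path in glob.glob("data/song_data/**/*.json", recursive=True):
    with open(path) as f:
        record = json.load(f)

    # Transform: keep only the columns the songs table needs.
    song_row = (
        record["song_id"],
        record["title"],
        record["artist_id"],
        record["year"],
        record["duration"],
    )

    # Load: a parameterized insert avoids SQL injection and
    # ON CONFLICT makes the pipeline safe to re-run.
    cur.execute(
        """
        INSERT INTO songs (song_id, title, artist_id, year, duration)
        VALUES (%s, %s, %s, %s, %s)
        ON CONFLICT (song_id) DO NOTHING;
        """,
        song_row,
    )

conn.commit()
conn.close()
```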
- Understand what ETL is, and the difference between ETL and ELT.
- Understand the difference between SQL and NoSQL databases and when to use each of them.
- How to choose the best schema (star, snowflake, galaxy, etc.) to model your data warehouse on (see the star-schema sketch after this list).
- How to set up Apache Cassandra and access the database engine locally, executing queries through its Python driver (see the Cassandra sketch after this list).
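For the schema point above: a star schema puts one fact table at the center and links it to dimension tables by key. A minimal sketch of a `songplays` fact table follows; the column names are illustrative of the typical Sparkify design, not a copy of the project's DDL.

```python
# A star schema keeps one central fact table (songplays) that references
# dimension tables (users, songs, artists, time) by their keys.
SONGPLAY_TABLE_CREATE = """
CREATE TABLE IF NOT EXISTS songplays (
    songplay_id SERIAL PRIMARY KEY,
    start_time  TIMESTAMP NOT NULL,  -- key into the time dimension
    user_id     INT       NOT NULL,  -- key into the users dimension
    level       VARCHAR,
    song_id     VARCHAR,             -- key into the songs dimension
    artist_id   VARCHAR,             -- key into the artists dimension
    session_id  INT,
    location    VARCHAR,
    user_agent  VARCHAR
);
"""
```

Analytical queries then join the fact table to only the dimensions they need, which is why this shape is favored for song play analysis.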
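And for the Cassandra point: a minimal sketch of connecting to a local node with the DataStax Python driver (`cassandra-driver`). The keyspace and table names here are illustrative assumptions; the key idea is that Cassandra tables are modeled around the queries you intend to run.

```python
from cassandra.cluster import Cluster  # pip install cassandra-driver

# Connect to a local single-node Cassandra instance.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# Keyspace with a replication factor of 1, suitable for local practice only.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS sparkify
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("sparkify")

# Model the table around the query: the partition key (session_id)
# matches a "look up plays by session" access pattern.
session.execute("""
    CREATE TABLE IF NOT EXISTS song_plays_by_session (
        session_id INT,
        item_in_session INT,
        artist TEXT,
        song TEXT,
        length FLOAT,
        PRIMARY KEY (session_id, item_in_session)
    )
""")

# Insert one illustrative row, then query it back by partition key.
session.execute(
    "INSERT INTO song_plays_by_session "
    "(session_id, item_in_session, artist, song, length) "
    "VALUES (%s, %s, %s, %s, %s)",
    (338, 4, "Faithless", "Music Matters", 495.3),
)
rows = session.execute(
    "SELECT artist, song, length FROM song_plays_by_session "
    "WHERE session_id = %s",
    (338,),
)
for row in rows:
    print(row.artist, row.song, row.length)

cluster.shutdown()
```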
Note: You will find the details and instructions for each project in its own directory.