Disclaimer: If you sign up for a (paid) course using this link, R-exercises earns a commission. It does not impact what you pay for a course, and helps us to keep R-exercises free.
As data scientists and analysts, we face constant repetitive tasks when approaching new data sets. This course aims to automate many of these tasks so you can get to the actual analysis as quickly as possible. Of course, there will always be exceptions to the rule; some manual work and customization will be required. But overall, a large swath of that work can be automated by building a smart pipeline, and that is what we'll do here. This is especially important in the era of big data, where handling variables by hand isn't always feasible.
It is also a great learning strategy to think in terms of a processing pipeline, and to understand, design and build each stage as a separate, independent unit.
What are the requirements?
- Basic understanding of R programming
- Some statistical and modeling knowledge
What am I going to get from this course?
- Build a pipeline to automate the processing of raw data for discovery and modeling
- Know the main steps to prepare data for modeling
- Know how to handle the different data types in R
- Understand data imputation
- Treat categorical data properly with binarization (making dummy columns)
- Apply feature engineering to dates, integers and real numbers
- Apply variable selection, correlation and significance tests
- Model and measure prepared data using both supervised and unsupervised modeling
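To give a flavor of the kind of automation the course covers, here is a minimal base-R sketch of three of the steps above: imputation, binarization of categorical data, and feature engineering on dates. The data frame and column names are made up for illustration; the course builds a far more general pipeline.

```r
# Toy data set with a missing numeric value, a factor, and dates
df <- data.frame(
  signup_date = as.Date(c("2020-01-15", "2020-03-02", "2020-07-21")),
  age         = c(25, NA, 40),
  plan        = factor(c("free", "pro", "pro"))
)

# 1. Data imputation: replace missing numerics with the column median
df$age[is.na(df$age)] <- median(df$age, na.rm = TRUE)

# 2. Binarization: expand the factor into dummy (0/1) columns
dummies <- model.matrix(~ plan - 1, data = df)

# 3. Feature engineering on dates: extract numeric month and weekday
df$signup_month   <- as.integer(format(df$signup_date, "%m"))
df$signup_weekday <- weekdays(df$signup_date)
```

Each step is a small, independent transformation, which is exactly what makes it easy to wrap into reusable pipeline functions.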
Who is the target audience?
- Anyone with an interest in, and a need for, processing raw data for exploration and modeling in R