In this two-part learning session, we discuss best practices for partitioning your data and for working with imbalanced datasets.
Five-fold cross-validation is often treated as a silver bullet for partitioning your data into training and validation sets, but there are some dangerous caveats you have to be aware of to make sure that you’re building robust models. In this learning session (part 1), we talk about those pitfalls and outline strategies for handling them.
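As a point of reference, here is a minimal sketch of plain five-fold cross-validation with scikit-learn; the synthetic data, the logistic-regression model, and the accuracy scoring are illustrative assumptions, not part of the session material.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# Synthetic stand-in data for the example
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

# Five folds, shuffled once up front; each fold serves as the held-out
# validation set exactly once while the other four folds are used for training.
cv = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(max_iter=1_000), X, y, cv=cv)

print(f"Fold accuracies: {np.round(scores, 3)}")
print(f"Mean ± std: {scores.mean():.3f} ± {scores.std():.3f}")
```

The pitfalls discussed in part 1 concern exactly this kind of default setup, where the folds are drawn without regard to class balance, grouping, or time ordering.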
Binary target variables are very common in data science use cases, and many of these targets are severely imbalanced. When you’re building models for infrequent events, such as predicting fraud or identifying product failures, it’s important to watch out for imbalance in your data. (In part 2 of this learning session, we discuss strategies for working with imbalanced datasets and provide some rules of thumb for these types of use cases.)
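To make the problem concrete, the sketch below (my own illustration, with synthetic data standing in for a rare-event target at roughly a 2% positive rate) shows one common first step: using StratifiedKFold so every validation fold preserves the overall class ratio, which plain KFold does not guarantee.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for a severely imbalanced binary target (~2% positives)
X, y = make_classification(
    n_samples=5_000, n_features=20, weights=[0.98, 0.02], random_state=0
)
print(f"Overall positive rate: {y.mean():.2%}")

# Stratified folds keep the positive rate roughly constant across folds
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for i, (train_idx, val_idx) in enumerate(cv.split(X, y)):
    print(f"Fold {i}: positive rate in validation fold = {y[val_idx].mean():.2%}")
```

Part 2 goes beyond fold construction into resampling, class weighting, and metric choices for these rare-event problems.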