This repository contains a small-scale experimental study analyzing how common regularization techniques affect CNN generalization.
Three model variants are compared:

- Baseline CNN (no regularization)
- CNN with Batch Normalization and Dropout
- CNN with Early Stopping
All experiments use the CIFAR-10 dataset.
Key observations:

- Baseline models overfit quickly, with training accuracy continuing to rise while validation accuracy stalls
- Batch normalization stabilizes training
- Early stopping prevents overfitting without architectural changes
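Early stopping is a training-loop decision rather than a model change. A minimal, framework-agnostic sketch of the usual rule (stop once validation loss has not improved for `patience` consecutive epochs; the function name and signature here are illustrative, not from this repo):

```python
def early_stopping(val_losses, patience=3):
    """Return the epoch index at which training would stop,
    or None if the patience budget is never exhausted."""
    best = float("inf")   # best validation loss seen so far
    bad_epochs = 0        # consecutive epochs without improvement
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch
    return None
```

In practice the loop would also checkpoint the model at each new best epoch and restore that checkpoint when stopping triggers.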
Install dependencies:
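The dependency list is not shown here; a typical setup, assuming a `requirements.txt` at the repository root:

```shell
# Install pinned dependencies (assumes a requirements.txt in the repo root)
pip install -r requirements.txt
```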
This project focuses on understanding training dynamics rather than maximizing benchmark accuracy.