When models memorize instead of learn. The difference between truly understanding a topic and merely memorizing it shows up the moment a test is changed. People who have only memorized falter on new questions, and AI models do the same. When a model remembers the training data too precisely but learns too little about the underlying patterns, it is said to be overfitting.
Overfitting occurs when a model learns the training data so closely that it captures noise and random fluctuations along with the core signal. It performs almost perfectly on the training set but fails badly on unseen data. Such models show low bias, since they make few simplifying assumptions, but very high variance: their predictions swing unpredictably when shown new information.
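The train-versus-test gap is easy to reproduce. The sketch below (an illustrative setup, not from the original text: synthetic data drawn from a sine curve plus noise) fits a low-degree and a high-degree polynomial to the same noisy points. The high-degree model has enough capacity to chase the noise, so its training error is lower but its test error is far worse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying signal: y = sin(x) + noise.
x_train = np.sort(rng.uniform(0, 3, 20))
y_train = np.sin(x_train) + rng.normal(0, 0.2, 20)
x_test = np.sort(rng.uniform(0, 3, 20))
y_test = np.sin(x_test) + rng.normal(0, 0.2, 20)

def mse(coef, x, y):
    """Mean squared error of a polynomial fit on (x, y)."""
    return float(np.mean((np.polyval(coef, x) - y) ** 2))

# A low-degree polynomial captures the general trend (the signal).
simple = np.polyfit(x_train, y_train, 3)
# A high-degree polynomial has enough capacity to memorize the noise.
complex_ = np.polyfit(x_train, y_train, 15)

print(f"degree 3:  train={mse(simple, x_train, y_train):.4f}  "
      f"test={mse(simple, x_test, y_test):.4f}")
print(f"degree 15: train={mse(complex_, x_train, y_train):.4f}  "
      f"test={mse(complex_, x_test, y_test):.4f}")
```

The degree-15 fit beats the degree-3 fit on the training points it memorized, yet is worse on fresh samples from the same distribution, which is the signature of low bias and high variance.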
Overfitted models can create a false sense of confidence, which is especially dangerous in areas like medical diagnosis or financial forecasting. Managing overfitting is therefore crucial. Techniques such as regularization, data augmentation, and training on larger, more diverse datasets help models focus on general patterns rather than memorizing every detail.
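Regularization, one of the techniques mentioned above, can be sketched concretely. The example below (an illustrative setup with hypothetical data, not from the original text) compares ordinary least squares with ridge regression, which adds an L2 penalty on the weights. The penalty shrinks the fitted coefficients, discouraging the wild oscillations that come from fitting every detail of the noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Few noisy points and many polynomial features: plenty of room to overfit.
x = np.sort(rng.uniform(0, 3, 20))
y = np.sin(x) + rng.normal(0, 0.2, 20)
X = np.vander(x, 10)  # degree-9 polynomial feature matrix

# Ordinary least squares: unconstrained, free to chase noise.
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Ridge regression: minimize ||Xw - y||^2 + lam * ||w||^2.
# Closed form: w = (X^T X + lam * I)^{-1} X^T y.
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

print("||w_ols||   =", np.linalg.norm(w_ols))
print("||w_ridge|| =", np.linalg.norm(w_ridge))
```

The ridge solution always has a smaller weight norm than the unpenalized fit, which is the mechanism by which it trades a little bias for a large reduction in variance.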