Description
Deep learning excels at extracting complex patterns from data but suffers from catastrophic forgetting when fine-tuned on new data. This book investigates how class-incremental and domain-incremental learning affect neural networks for automated driving, identifying semantic shifts and feature changes as key contributing factors. Tools for quantitatively measuring forgetting are selected and applied to show how strategies such as image augmentation, pretraining, and architectural adaptations mitigate catastrophic forgetting.