Description
Make your deep learning models more adaptable with these practical regularisation techniques. For data scientists, machine learning engineers, and researchers with basic model development experience who want to improve their training efficiency and avoid overfitting errors.
Regularization in Deep Learning delivers practical techniques to help you build more general and adaptable deep learning models. It goes beyond basic techniques like data augmentation and explores strategies for architecture, objective function, and optimisation.
You will turn regularisation theory into practice using PyTorch, following guided implementations that you can easily adapt and customise to your own model's needs.
Key features include:
- Insights into model generalisability
- A holistic overview of regularisation techniques and strategies
- Classical and modern views of generalisation, including the bias-variance tradeoff
- When and where to use different regularisation techniques
- The background knowledge you need to understand cutting-edge research

Along the way, you will get just enough of the theory and mathematics behind regularisation to understand the new research emerging in this important area.
About the technology
Deep learning models that generate highly accurate results on their training data can struggle with messy real-world test datasets. Regularisation strategies address this gap with techniques that help your models handle noisy data and changing requirements. By learning to tweak training data and loss functions, and to employ other regularisation approaches, you can ensure a model delivers excellent generalised performance and avoids overfitting errors.
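The loss-function tweak mentioned above can be sketched without any framework. The snippet below shows the core idea of L2 regularisation: the training objective becomes the data loss plus a penalty on weight magnitude, so large weights are discouraged. All function names and numbers here are illustrative, not taken from the book.

```python
def mse(preds, targets):
    """Mean squared error over a list of predictions."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def regularised_loss(preds, targets, weights, lam=0.01):
    """Data loss plus an L2 penalty.

    lam (the regularisation strength) controls the tradeoff: lam=0 recovers
    the plain data loss, while larger lam values push the optimiser toward
    smaller weights, which tends to improve generalisation.
    """
    l2_penalty = sum(w * w for w in weights)
    return mse(preds, targets) + lam * l2_penalty

preds, targets = [1.0, 2.0, 3.0], [1.1, 1.9, 3.2]
small_w, large_w = [0.1, 0.2], [5.0, 8.0]

# Same fit to the data, but the large-weight model pays a bigger penalty.
print(regularised_loss(preds, targets, small_w))
print(regularised_loss(preds, targets, large_w))
```

In PyTorch, the same effect is commonly obtained via the `weight_decay` argument to an optimiser, or by adding the penalty term to the loss by hand, which is closer in spirit to the guided implementations the book describes.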