WHAT IS HYPERPARAMETER TUNING?
The settings or knobs that can be adjusted before starting a training task to regulate how an ML algorithm behaves are known as hyperparameters. They can have a significant impact on model training in terms of training time, infrastructure resource requirements (and consequent cost), model convergence, and model accuracy.
Hyperparameter tuning is the process of finding the best-performing model by systematically adjusting these settings, or hyperparameters.
WHEN IS HYPERPARAMETER TUNING USED?
When developing a machine learning model, we are given design choices for specifying the model architecture. Because we often do not know in advance what the best architecture is for a given problem, we would like to explore a range of options. Ideally, in true machine learning style, we ask the computer to conduct this search and choose the best configuration on its own. Whenever a trade-off such as the bias-variance trade-off or the precision-recall trade-off must be balanced for a given model, hyperparameter tuning comes to the rescue to train a model that neither overfits nor underfits.
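The bias-variance trade-off above can be sketched with a single hyperparameter. In this illustrative example (the data and degree range are assumptions for demonstration), the degree of a polynomial fit acts as the hyperparameter: a low degree underfits, a very high degree overfits the noise, and validation error picks a value in between.

```python
import numpy as np

# Synthetic data (an assumption for illustration): noisy samples of a sine wave.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

# Hold out every other point as a validation set.
x_tr, y_tr = x[::2], y[::2]
x_va, y_va = x[1::2], y[1::2]

def val_error(degree):
    coeffs = np.polyfit(x_tr, y_tr, degree)  # fit on the training points
    pred = np.polyval(coeffs, x_va)          # predict on held-out points
    return np.mean((pred - y_va) ** 2)       # validation mean squared error

# Try a range of degrees and keep the one with the lowest validation error.
errors = {d: val_error(d) for d in range(1, 12)}
best_degree = min(errors, key=errors.get)
print(best_degree, errors[best_degree])
```

A straight line (degree 1) cannot follow the sine wave and underfits; the tuned degree gives a markedly lower validation error, which is exactly the balance hyperparameter tuning seeks.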
WHERE IS HYPERPARAMETER TUNING USED?
Hyperparameter tuning techniques are used extensively across supervised, unsupervised, and reinforcement learning algorithms.
WHO USES HYPERPARAMETER TUNING?
In a rudimentary data science project pipeline comprising data engineers, data analysts, and machine learning engineers, the task of hyperparameter tuning fundamentally falls upon the machine learning engineer, who is responsible for model evaluation and model metrics.
WHY IS HYPERPARAMETER TUNING USED?
It is uncommon for a model to perform at the standard you require for production on the first try. You frequently need to go through an iterative cycle to find the best answer to your business problem, combining several elements of the machine learning puzzle: you might need to perform feature engineering several times, add more data, or train and evaluate numerous models using various data setups and algorithms. During this cycle, your model's hyperparameters must also be adjusted.
HOW ARE HYPERPARAMETERS TUNED?
Hyperparameters can be tuned using brute-force techniques such as the elbow method, or by tracking accuracy or error across cross-validation schemes such as K-fold cross-validation. There are also dedicated hyperparameter search algorithms, notably scikit-learn's GridSearchCV and RandomizedSearchCV.
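As a minimal sketch of the grid-search approach named above, the snippet below uses scikit-learn's GridSearchCV to exhaustively score a small parameter grid with 5-fold cross-validation. The dataset (Iris) and the particular grid values are illustrative assumptions, not recommendations.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Candidate hyperparameter values to try exhaustively (assumed for illustration).
param_grid = {
    "C": [0.1, 1, 10],           # regularization strength
    "kernel": ["linear", "rbf"],  # kernel function
}

# GridSearchCV fits and cross-validates every combination in the grid,
# then retains the combination with the best mean validation score.
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)
print(round(search.best_score_, 3))
```

RandomizedSearchCV has the same interface but samples a fixed number of combinations from the grid (or from distributions) instead of trying them all, which scales better when the search space is large.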
HOW MANY HYPERPARAMETERS ARE TUNED?
The number of hyperparameters depends entirely on the specific machine learning algorithm being trained. Machine learning draws its fundamental concepts from a diverse stack of domains, including physics, mathematics, statistics, probability theory, and information theory, and this makes each algorithm fundamentally different. Hence the number of parameters to be tuned can range from zero, as in simple linear regression, to nine in a random forest classifier, or even more depending on the algorithm.
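This variation can be seen directly in scikit-learn, where every estimator exposes its configurable settings through `get_params()`. The sketch below compares a plain linear regression with a random forest classifier; the exact counts are version-dependent, so the code only relies on the forest exposing more settings.

```python
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier

# get_params() returns a dict of every configurable setting for the estimator.
lin_params = LinearRegression().get_params()
rf_params = RandomForestClassifier().get_params()

print(len(lin_params))  # only a handful of settings for linear regression
print(len(rf_params))   # many more for a random forest
print(sorted(rf_params))
```

Not every exposed setting is worth tuning (some, like `n_jobs`, only affect parallelism), but the contrast illustrates why tuning effort differs so much between algorithms.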