[SOLVED] What is tuning in machine learning?

Issue

I am a novice learner of machine learning and am confused by tuning.
What is the purpose of tuning in machine learning? Is it to select the best parameters for an algorithm?
How does tuning work?

Solution

Without getting into a technical demonstration that would seem more appropriate for Stack Overflow, here are some general thoughts. Essentially, one can argue that the ultimate goal of machine learning is to build a system that can automatically learn models from data without requiring tedious and time-consuming human involvement. As you recognize, one of the difficulties is that learning algorithms (e.g., decision trees, random forests, clustering techniques, etc.) require you to set parameters before you use the models (or at least to set constraints on those parameters).

How you set those parameters can depend on a whole host of factors. That said, your goal is usually to set those parameters to optimal values that enable you to complete a learning task in the best way possible. Thus, tuning an algorithm or machine learning technique can simply be thought of as the process of optimizing the parameters that impact the model so that the algorithm performs at its best (once, of course, you have defined what "best" actually is).
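In its simplest form, that process is just a loop: pick candidate values for a parameter, train the model, measure how well it does, and keep the value that does best. Here is a minimal sketch of that idea using a decision tree's max_depth as the parameter being tuned; scikit-learn, the iris dataset, and the candidate values are purely illustrative choices, not the only way to do it.

```python
# Minimal sketch of tuning one parameter: try several candidate values,
# score each on held-out validation data, and keep the best one.
# (scikit-learn, the dataset, and the candidate depths are illustrative.)
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

best_depth, best_score = None, -1.0
for depth in [1, 2, 3, 5, 10]:          # candidate parameter values
    model = DecisionTreeClassifier(max_depth=depth, random_state=0)
    model.fit(X_train, y_train)
    score = model.score(X_val, y_val)   # "best" here = validation accuracy
    if score > best_score:
        best_depth, best_score = depth, score

print(f"best max_depth={best_depth}, validation accuracy={best_score:.3f}")
```

The important part is not the specific model or metric, but the pattern: define what "best" means, then search over parameter values against that definition.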

To make it more concrete, here are a few examples. If you take a clustering algorithm like k-means, you will note that you, as the programmer, must specify the number of clusters K (that is, the number of centroids) used in your model. How do you do this? You tune the model. There are many ways to do this. One of them is to try many different values of K and look at how the inter-group and intra-group error change as you vary K.
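A rough sketch of that idea is below, using scikit-learn's KMeans on synthetic data: for each candidate K, fit the model and record the within-cluster (intra-group) error, then look for the value of K beyond which the error stops improving much (the usual "elbow" heuristic). The library, the synthetic data, and the choice of inertia as the error measure are assumptions made for illustration.

```python
# Try many values of K for k-means and record the within-cluster error.
# (scikit-learn's KMeans and make_blobs are illustrative choices.)
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    # inertia_ = sum of squared distances to the nearest centroid
    # (the intra-group error); plotting it against K and looking for
    # an "elbow" is one common way to pick K.
    print(f"k={k}: within-cluster SSE = {km.inertia_:.1f}")
```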

As another example, consider support vector machine (SVM) classification. SVM classification requires an initial learning phase in which the training data are used to adjust the classification parameters. This really refers to an initial parameter-tuning phase in which you, as the programmer, might try to "tune" the model in order to achieve high-quality results.
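In practice, that tuning phase is often automated with a cross-validated search over candidate parameter values. The sketch below uses scikit-learn's GridSearchCV to try a few values of C and gamma for an RBF-kernel SVM; the particular grid and the iris dataset are illustrative assumptions, not a recommendation.

```python
# Hedged sketch: cross-validated grid search over SVM parameters C and gamma.
# (scikit-learn, the parameter grid, and the dataset are illustrative.)
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 0.01]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print("best parameters:", search.best_params_)
print("best cross-validated accuracy:", round(search.best_score_, 3))
```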

Now, you might be thinking that this process can be difficult, and you are right. In fact, because it is so hard to determine what the optimal model parameters are, some researchers reach for complex learning algorithms before adequately experimenting with simpler alternatives whose parameters are better tuned.

Answered By – Nathaniel Payne

Answer Checked By – Mildred Charles (BugsFixing Admin)
