Feature-based Transfer Learning vs. Fine-Tuning?

Angelina Yang
2 min read · Oct 3, 2022

There are a lot of deep explanations elsewhere so here I’d like to share some example questions in an interview setting.

What’s the difference between feature-based transfer learning and fine-tuning?

Image: Transfer Learning in NLP (Deeplearning.ai)

Here are some example answers for readers’ reference:

Two methods that you can use for transfer learning are the following:

In feature-based transfer learning, you train word embeddings with one model and then reuse those features (i.e., the word vectors) as fixed inputs to a different model on a different task.
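A minimal sketch of this idea, using toy hand-made word vectors as a stand-in for embeddings learned by a separate model (in practice these would come from something like word2vec or GloVe):

```python
import numpy as np

# Toy "pretrained" word vectors standing in for embeddings learned on a
# different task by a different model. These are frozen: feature-based
# transfer never updates them.
pretrained_vectors = {
    "good":  np.array([0.9, 0.1]),
    "bad":   np.array([0.1, 0.9]),
    "movie": np.array([0.5, 0.5]),
}

def featurize(sentence):
    """Reuse the frozen word vectors as fixed features for a new task,
    here by simply averaging them into one sentence vector."""
    vecs = [pretrained_vectors[w] for w in sentence.split()
            if w in pretrained_vectors]
    return np.mean(vecs, axis=0)

# Only a downstream classifier trained on these features would learn;
# the embeddings themselves stay untouched.
features = featurize("good movie")
```

The key point the interviewer is usually probing for: the pretrained model only supplies features, and a separate model is trained on top of them.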

When fine-tuning, you take the same pretrained model and continue training it on a different task. Sometimes you keep the pretrained weights frozen and train only a newly added layer; other times you slowly unfreeze the layers one at a time. You can also pretrain on unlabelled data by masking words and training the model to predict the masked word.
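Both freezing variants can be sketched in a few lines of PyTorch. The `backbone` here is a hypothetical stand-in for a pretrained encoder (e.g., a BERT-like model), not any specific architecture from the post:

```python
import torch.nn as nn

# Hypothetical pretrained backbone (stand-in for e.g. a BERT encoder).
backbone = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))
head = nn.Linear(8, 2)  # new task-specific layer, trained from scratch

# Variant 1: freeze all pretrained weights; only the new head will update.
for p in backbone.parameters():
    p.requires_grad = False

# Variant 2: gradual unfreezing -- re-enable backbone layers one at a
# time, typically starting from the layer closest to the output.
for p in backbone[2].parameters():
    p.requires_grad = True
```

An optimizer built over `filter(lambda p: p.requires_grad, ...)` then trains only the unfrozen parameters; which schedule works best is an empirical question per task.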

For example, in the drawing above the model tries to predict the word “friend”. This self-supervised objective lets the model grasp the overall structure of the data and learn relationships among the words of a sentence.
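The masking step itself is simple to illustrate. A toy sketch (the sentence and mask token are illustrative, echoing the “friend” example above):

```python
# Self-supervised masking: turn unlabelled text into a prediction task.
sentence = ["my", "best", "friend", "called", "me"]
mask_position = 2

masked_input = sentence.copy()
target = masked_input[mask_position]   # "friend" -- the label to predict
masked_input[mask_position] = "[MASK]"

# The model sees `masked_input` and is trained to recover `target`;
# no human labels are needed, only raw text.
```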

Happy practicing!

Thanks for reading my newsletter. You can follow me on LinkedIn!

Note: There are different angles from which to answer an interview question. This newsletter does not aim to answer each question exhaustively; rather, it shares quick insights to help readers think, practice, and do further research as necessary.

Source of video/answers: Transfer Learning (C3W2L07) by Deeplearning.ai

Source of images: Transfer Learning in NLP by Dr.Younes Bensouda Mourri from Deeplearning.ai
Transfer Learning — Machine Learning’s Next Frontier by Sebastian Ruder