
A Refresher on Batch Normalization!

Angelina Yang
3 min read · Oct 16, 2022


There are plenty of in-depth explanations elsewhere, so here I'd like to share some example questions you might encounter in an interview setting.

What is “batch normalization” for neural network models? How do you implement it? And what are some of the advantages and disadvantages of using it?

Source: In-layer normalization techniques for training very deep neural networks

Here are some example answers for readers’ reference:

When feeding data into a deep learning model, it is standard practice to normalize the inputs to zero mean and unit variance. Batch normalization extends this idea to the intermediate layers of the network. It was introduced in the Batch Normalization paper (Ioffe and Szegedy, 2015) and was recognized as transformational, enabling deeper neural networks that could be trained faster.
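As a concrete illustration of that input-normalization step, here is a minimal NumPy sketch (the toy data shape and variable names are my own, chosen for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 3))  # toy data: 100 samples, 3 features

# Standardize each feature column to zero mean and unit variance.
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_norm.mean(axis=0))  # approximately 0 for every feature
print(X_norm.std(axis=0))   # approximately 1 for every feature
```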

During training, we feed the network one mini-batch of data at a time. During the forward pass, each layer of the network processes that mini-batch of data. The Batch Norm layer processes its data as follows:

Source: Batch Norm Explained Visually
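The figure is not reproduced here, but the computation it illustrates is the one from the original paper: compute the per-feature mean and variance over the mini-batch, normalize, then apply a learned scale and shift. Below is a minimal NumPy sketch of the training-time forward pass (the function name and the `eps` default are my own choices; a full layer would also maintain running mean and variance estimates for use at inference):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Training-time Batch Norm forward pass for a mini-batch x of shape (N, D)."""
    mu = x.mean(axis=0)                     # per-feature mean over the mini-batch
    var = x.var(axis=0)                     # per-feature variance over the mini-batch
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize: zero mean, unit variance
    return gamma * x_hat + beta             # learnable scale (gamma) and shift (beta)

# Example: a mini-batch of 32 samples with 4 features.
x = np.random.randn(32, 4)
out = batchnorm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
```

Note that `gamma` and `beta` are learned during training; they let the network undo the normalization where that is useful, rather than forcing every layer's activations to stay strictly standardized.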
