What Are the Drawbacks of Beam Search?
There are plenty of deep explanations elsewhere, so here I’d like to share some example questions in an interview setting.
What are some of the drawbacks of “beam search decoding” in neural machine translation?
Here are some example answers for readers’ reference:
The drawbacks of this method are:
1. It penalizes long sequences: the score is a product of conditional probabilities (each at most 1), so longer hypotheses accumulate lower scores. A common fix is to normalize the score by sentence length;
2. It is computationally expensive and consumes a lot of memory, since it keeps and expands multiple hypotheses at every decoding step.
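The length bias and its fix can be seen in a small sketch. The toy transition table, tokens, and `beam_search` function below are all hypothetical, for illustration only; a real NMT decoder would score continuations with a neural network over a full vocabulary.

```python
import math

# Hypothetical toy language model: next-token log-probabilities keyed by
# the previous token. "<eos>" ends a hypothesis. (Illustrative only.)
LOG_PROBS = {
    "<s>":    {"hi": math.log(0.5), "hello": math.log(0.5)},
    "hi":     {"<eos>": math.log(1.0)},
    "hello":  {"there": math.log(0.8), "<eos>": math.log(0.2)},
    "there":  {"friend": math.log(0.8), "<eos>": math.log(0.2)},
    "friend": {"<eos>": math.log(1.0)},
}

def beam_search(beam_width=2, max_len=6, length_normalize=True):
    """Return the best token sequence found by beam search."""
    beams = [(["<s>"], 0.0)]   # (tokens, cumulative log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        # Expand every surviving hypothesis by every possible next token --
        # this per-step fan-out is where the compute/memory cost comes from.
        for tokens, score in beams:
            for tok, lp in LOG_PROBS.get(tokens[-1], {}).items():
                candidates.append((tokens + [tok], score + lp))
        beams = []
        for tokens, score in sorted(candidates, key=lambda c: c[1], reverse=True):
            if tokens[-1] == "<eos>":
                finished.append((tokens, score))   # hypothesis complete
            elif len(beams) < beam_width:
                beams.append((tokens, score))      # keep top-k unfinished
        if not beams:
            break
    # Every extra factor multiplies in a probability <= 1, so raw scores
    # favor short outputs; dividing by length removes that bias.
    def final_score(hyp):
        tokens, score = hyp
        return score / len(tokens) if length_normalize else score
    return max(finished, key=final_score)[0]

# Raw scores pick the short sentence; length-normalized scores pick the
# longer hypothesis whose per-token probability is higher.
print(beam_search(length_normalize=False))  # ['<s>', 'hi', '<eos>']
print(beam_search(length_normalize=True))   # ['<s>', 'hello', 'there', 'friend', '<eos>']
```

With raw scores the single-word answer wins (log 0.5 ≈ −0.69 beats the longer hypothesis’ total of ≈ −1.14), but after dividing by length the longer, per-token-more-probable hypothesis comes out ahead.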
Watch the explanation by Dr. Younes Bensouda Mourri from Deeplearning.ai (this video also covers how beam search works, so it is a good refresher!).
Happy practicing!
Thanks for reading my newsletter. You can follow me on LinkedIn!
Note: There are different angles from which to answer an interview question. The author of this newsletter does not try to find a reference that answers a question exhaustively. Rather, the author would like to share some quick insights and help readers think, practice, and do further research as necessary.
Source of video/answers: Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 8 — Translation, Seq2Seq, Attention, by Dr. Abby See; Natural Language Processing with Attention Models, by Deeplearning.ai.
Source of images: Foundations of NLP Explained Visually: Beam Search, How It Works, by Ketan Doshi (Medium).