Channel: tanmay bakshi
Category: Science & Technology
Description: In this video, I answer a question about BERT: should I pre-train a second time on domain-specific text? Usually, BERT is fine-tuned directly on a downstream task. However, the amount of labelled data available for that task is sometimes limited. In that case, you may want to consider continuing to pre-train BERT in an unsupervised fashion on unlabelled text from the same domain as your labelled data, before fine-tuning on the labelled set. Code: github.com/tanmayb123/BertPreTraining
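
A minimal sketch of what such domain-adaptive pre-training can look like, using the Hugging Face Transformers and Datasets libraries with masked-language-modelling only (no next-sentence prediction). This is not the code from the linked repo; the corpus file name ("domain_corpus.txt"), output directory, and hyperparameters are illustrative assumptions.

```python
from transformers import (
    BertTokenizerFast,
    BertForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Start from the publicly released BERT checkpoint.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Unlabelled, domain-specific text, one document or sentence per line (assumed file name).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# The MLM collator randomly masks 15% of tokens, which the model learns to reconstruct.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

args = TrainingArguments(
    output_dir="bert-domain-adapted",   # illustrative output path
    num_train_epochs=3,
    per_device_train_batch_size=16,
    save_steps=5000,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()

# The checkpoint saved in "bert-domain-adapted" can then be fine-tuned
# on the smaller labelled downstream dataset.
```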