Commit eeca567 (1 parent: 718a92f)

Fix typo in BERT notebook

Signed-off-by: Rajeev Rao <rajeevrao@nvidia.com>

File tree: 1 file changed (+1, -1)

demo/BERT/notebooks/BERT-TRT-FP16.ipynb

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@
 "# BERT QA Inference with TensorRT FP16\n",
 "\n",
 "\n",
-"Bidirectional Embedding Representations from Transformers ([BERT](https://arxiv.org/abs/1810.04805)) is a method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks. \n",
+"Bidirectional Encoder Representations from Transformers ([BERT](https://arxiv.org/abs/1810.04805)) is a method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks. \n",
 "\n",
 "BERT provided a leap in accuracy for NLU tasks that brought high-quality language-based services within the reach of companies across many industries. To use the model in production, you need to consider factors such as latency, in addition to accuracy, which influences end user satisfaction with a service. BERT requires significant compute during inference due to its 12/24-layer stacked multi-head attention network. This has posed a challenge for companies to deploy BERT as part of real-time applications until now.\n",
 "\n",
