  1. Understanding BERT: This One Article Is All You Need - 知乎

    BERT (Bidirectional Encoder Representation from Transformers) is a pre-trained model proposed by the Google AI research institute in October 2018. It achieved astonishing results on SQuAD1.1, a top-tier machine reading comprehension benchmark: on both …

  2. The BERT Model Family | 菜鸟教程

    BERT (Bidirectional Encoder Representations from Transformers) is a revolutionary natural language processing model proposed by Google in 2018 that fundamentally changed the research and application paradigm of the NLP field.

  3. A Long-Form Guide to Understanding the BERT Model (Very Detailed): This One Article Is All You Need! - C…

    Oct 26, 2024 · Q: What is BERT used for? BERT is used for NLP tasks such as text representation, named entity recognition, text classification, question answering, machine translation, and text summarization.

  4. BERT (language model) - Wikipedia

    Next sentence prediction (NSP): In this task, BERT is trained to predict whether one sentence logically follows another. For example, given two sentences, "The cat sat on the mat" and "It was a sunny …

  5. BERT: Pre-training of Deep Bidirectional Transformers for Language ...

    Oct 11, 2018 · Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context …

  6. 【BERT】BERT Explained in Detail - 彼得虫 - 博客园

    Jun 15, 2024 · BERT, short for Bidirectional Encoder Representation of Transformer, was first proposed in the paper "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding".

  7. A Beginner-Friendly【BERT Model】Course: Li Mu Lives Up to His Reputation, a BERT Model Tutorial Even Novices Can Follow …

    More satisfying than binge-watching! The AI/NLP/transforme series comprises 26 videos in total, including: 1. BERT Code P1, A Zero-Basics AI Learning Roadmap P2, 2. BERT Pre-training Data Code, and more; for more videos, follow the uploader's (UP主) account.

  8. An Introduction to the BERT Model - Tencent Cloud Developer Community - 腾讯云

    Dec 25, 2024 · BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained language model proposed by Google in 2018 that has drawn wide attention and seen broad application in natural language processing (NLP).

  9. What Is the BERT Model and How Does It Work? - Coursera

    Jul 23, 2025 · BERT is a deep learning language model designed to improve the efficiency of natural language processing (NLP) tasks. It is famous for its ability to consider context by analyzing the …

  10. What is BERT? NLP Model Explained - Snowflake

    Discover what BERT is and how it works. Explore BERT model architecture, algorithm, and impact on AI, NLP tasks and the evolution of large language models.
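The pre-training objectives mentioned in results 4 and 5 can be made concrete. Below is a minimal, self-contained sketch (plain Python, not any library's actual API; `VOCAB` and `mlm_mask` are illustrative names) of the masked-language-model input corruption described in the BERT paper: roughly 15% of tokens are selected, and of those, 80% are replaced with `[MASK]`, 10% with a random token, and 10% left unchanged, with the model trained to recover the original token at every selected position.

```python
import random

MASK = "[MASK]"
VOCAB = ["cat", "sat", "mat", "sun", "dog"]  # toy vocabulary for random replacement

def mlm_mask(tokens, p_select=0.15, seed=0):
    """BERT-style MLM corruption: select ~15% of tokens; of those,
    80% -> [MASK], 10% -> a random vocabulary token, 10% -> unchanged.
    Returns (masked_tokens, labels), where labels hold the original
    token at selected positions and None at positions the loss ignores."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < p_select:
            labels.append(tok)  # the model must predict this original token
            r = rng.random()
            if r < 0.8:
                masked.append(MASK)           # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(rng.choice(VOCAB))  # 10%: random token
            else:
                masked.append(tok)            # 10%: keep unchanged
        else:
            labels.append(None)  # not selected: excluded from the MLM loss
            masked.append(tok)
    return masked, labels
```

The 10% random / 10% unchanged cases exist because `[MASK]` never appears at fine-tuning time; corrupting inputs in varied ways forces the model to keep a contextual representation for every token rather than only masked ones.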