Large Language Models
BERT
Bidirectional Encoder Representations from Transformers - a Transformer encoder model, pre-trained with masked language modeling, that builds each token's representation from context on both its left and its right.
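For a concrete picture, here is a minimal sketch, assuming the Hugging Face transformers library and the publicly available bert-base-uncased checkpoint, of how BERT turns a sentence into one context-aware vector per token.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("The bank raised interest rates.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One vector per token; each vector reflects context from both directions,
# so "bank" here is encoded differently than in "the river bank was muddy".
print(outputs.last_hidden_state.shape)  # torch.Size([1, num_tokens, 768])
```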
Related Concepts
- Transformer: BERT is built from the encoder stack of the original Transformer architecture.
- Encoder-Only: BERT uses only the Transformer's encoder, producing representations of its input rather than generating text token by token.
- Masked Language Modeling: BERT's main pre-training objective, in which randomly masked tokens are predicted from the surrounding context (see the example after this list).
- Bidirectional: each token's representation attends to context on both its left and its right, unlike left-to-right language models.
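Masked language modeling is easiest to see in action. The sketch below assumes the Hugging Face transformers library and its fill-mask pipeline with the bert-base-uncased checkpoint; BERT fills in the [MASK] token using context from both directions.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model predicts the masked token from both left and right context.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```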
Why It Matters
BERT popularized the pre-train-then-fine-tune workflow that underpins much of modern NLP: a single pre-trained encoder can be adapted to classification, question answering, named-entity recognition, and other tasks with relatively little labeled data. Understanding BERT builds a foundation for encoder-only models and for more advanced topics in AI and machine learning.
Learn More
This term is part of the comprehensive AI/ML glossary. Explore related terms to deepen your understanding of this interconnected field.
Tags
large-language-models transformer encoder-only masked-language-modeling
Related Terms
Masked Language Modeling
A pre-training objective where random tokens are masked and the model learns to predict them from context.
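As an illustration, a simplified masking step might look like the following. This is a sketch only; BERT's actual recipe also sometimes replaces a selected token with a random token or leaves it unchanged rather than always inserting [MASK], and the function name here is illustrative.

```python
import torch

def mask_tokens(input_ids, mask_token_id, mask_prob=0.15):
    """Randomly mask tokens and build labels for the MLM loss (sketch)."""
    labels = input_ids.clone()
    # Pick roughly 15% of positions at random as prediction targets.
    selected = torch.rand(input_ids.shape) < mask_prob
    labels[~selected] = -100              # unmasked positions are ignored by the loss
    corrupted = input_ids.clone()
    corrupted[selected] = mask_token_id   # replace chosen tokens with [MASK]
    return corrupted, labels
```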
Transformer
A neural network architecture introduced in 'Attention is All You Need' (2017) that relies entirely on self-attention mechanisms, becoming the foundation for modern LLMs.
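To make "relies entirely on self-attention" concrete, here is a bare-bones sketch of scaled dot-product self-attention, the operation at the core of each Transformer layer; the function and variable names are illustrative.

```python
import math
import torch

def self_attention(q, k, v):
    """Scaled dot-product attention over a full sequence (sketch).

    q, k, v: tensors of shape (batch, seq_len, d_k).
    """
    scores = q @ k.transpose(-2, -1) / math.sqrt(k.size(-1))
    weights = torch.softmax(scores, dim=-1)  # each token attends to every token
    return weights @ v                       # context-mixed representations
```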