Large Language Models

Greedy Decoding

Greedy decoding selects the single most likely next token at every generation step. It is fast and deterministic, but because each choice is only locally optimal, it can produce repetitive or globally suboptimal outputs.

Greedy decoding is the simplest decoding strategy for autoregressive language models and serves as the baseline against which alternatives such as beam search and sampling are compared.
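The idea can be sketched in a few lines: at each step, score every vocabulary token and take the argmax, never sampling. The `next_token_logits` function below is a hypothetical toy stand-in for a real model's forward pass, not any specific library's API.

```python
# Toy vocabulary; a real LLM would have tens of thousands of tokens.
VOCAB = ["<eos>", "the", "cat", "sat", "mat"]

def next_token_logits(tokens):
    # Hypothetical stand-in for a model forward pass: deterministic
    # scores that prefer a fixed continuation, then end-of-sequence.
    continuation = ["the", "cat", "sat", "<eos>"]
    step = len(tokens)
    target = continuation[step] if step < len(continuation) else "<eos>"
    return [5.0 if tok == target else 0.0 for tok in VOCAB]

def greedy_decode(max_steps=10):
    tokens = []
    for _ in range(max_steps):
        logits = next_token_logits(tokens)
        # Greedy step: always take the argmax of the logits.
        best = max(range(len(VOCAB)), key=lambda i: logits[i])
        if VOCAB[best] == "<eos>":
            break
        tokens.append(VOCAB[best])
    return tokens

print(greedy_decode())  # → ['the', 'cat', 'sat']
```

Because the argmax is deterministic, running this twice always yields the same sequence; sampling-based decoding would instead draw from the softmax of the logits, trading determinism for diversity.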

Related Terms

  • Generation
  • Beam Search
  • Sampling

Tags

large-language-models generation beam-search sampling

Added: November 18, 2025