Neural Networks & Deep Learning

Mish

A smooth, non-monotonic activation function, Mish(x) = x * tanh(softplus(x)). Because it is differentiable everywhere and allows a small negative response for negative inputs, it can provide better gradient flow than ReLU.
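As an illustrative sketch (the NumPy implementation below is an assumption for clarity and is not part of the original entry; frameworks such as PyTorch also ship a built-in torch.nn.Mish), the definition translates directly into code:

```python
import numpy as np

def softplus(x: np.ndarray) -> np.ndarray:
    # softplus(x) = ln(1 + exp(x)); np.logaddexp(0, x) computes this without overflow
    return np.logaddexp(0.0, x)

def mish(x: np.ndarray) -> np.ndarray:
    # Mish(x) = x * tanh(softplus(x))
    return x * np.tanh(softplus(x))

# Quick check: smooth and non-monotonic, slightly negative for negative inputs,
# zero at zero, and approximately identity for large positive inputs.
xs = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(mish(xs))
```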

Proposed by Diganta Misra in 2019, Mish is used as a drop-in replacement for ReLU and closely related activations such as Swish in modern deep-learning architectures.

Related Terms

  • Activation Function
  • ReLU
  • Swish

Tags

neural-networks-deep-learning activation-function relu swish

Added: November 18, 2025