AI Glossary
Plain-language definitions for the AI terms that actually matter, with practical context on why each one is relevant.
Models & Platforms
Analytical AI
AI systems that find patterns, anomalies, and insights in large datasets, used for tasks like fraud detection, medical imaging analysis, and business intelligence.
Artificial Intelligence (AI)
A broad field of computer science focused on building systems that can perform tasks normally requiring human intelligence, such as understanding language, recognizing patterns, and making decisions.
Deep Learning
A subset of machine learning that uses neural networks with many layers to learn increasingly abstract representations of data, powering breakthroughs in language, vision, and generation.
Diffusion Model
A generative AI architecture that creates images, video, and other media by learning to gradually remove noise from random static until a coherent output emerges.
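The "add noise" half of this process has a simple closed form, which is what the model learns to reverse. A minimal sketch in pure Python, using a toy 1-D signal in place of an image and an illustrative linear noise schedule (all numbers are made up for demonstration):

```python
import math
import random

def add_noise(x0, t, alphas_cumprod):
    """Sample a noised version of the clean signal x0 at timestep t:
    scale the signal down and mix in Gaussian noise."""
    a_bar = alphas_cumprod[t]
    return [math.sqrt(a_bar) * v + math.sqrt(1 - a_bar) * random.gauss(0, 1)
            for v in x0]

# A toy linear schedule: the cumulative signal fraction a_bar shrinks
# toward 0, so later timesteps are mostly noise.
T = 10
betas = [0.05 * (i + 1) for i in range(T)]
alphas_cumprod = []
prod = 1.0
for b in betas:
    prod *= (1 - b)
    alphas_cumprod.append(prod)

x0 = [1.0] * 8                                  # a toy "clean" signal
x_early = add_noise(x0, 0, alphas_cumprod)      # mostly signal
x_late = add_noise(x0, T - 1, alphas_cumprod)   # mostly noise
```

Training teaches a network to predict the noise added at each step; generation then runs the chain backwards, starting from pure static and denoising one step at a time.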
Machine Learning
A subset of AI where systems learn patterns from data rather than following explicitly programmed rules, improving their performance as they see more examples.
Mixture of Experts (MoE)
A neural network architecture that scales model capacity efficiently by routing each input through only a small subset of specialized sub-networks ("experts"), keeping compute costs manageable even as total model size grows.
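The routing idea can be sketched in a few lines of pure Python. This is a toy: the "experts" are simple functions and the gate scores are hard-coded, whereas a real model learns both; only the top-k routing logic is the point.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_layer(x, gate_scores, experts, k=2):
    """Send input x to only the top-k experts by gate score and mix their
    outputs by renormalized gate weights. The other experts do no work,
    which is why total parameter count can grow without growing compute."""
    ranked = sorted(range(len(experts)), key=lambda i: gate_scores[i], reverse=True)
    chosen = ranked[:k]
    weights = softmax([gate_scores[i] for i in chosen])
    return sum(w * experts[i](x) for w, i in zip(weights, chosen))

# Four toy "experts", each a different simple transformation.
experts = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x * 0.5]
out = moe_layer(10.0, gate_scores=[0.1, 2.0, 0.2, 1.5], experts=experts, k=2)
```

Here only experts 1 and 3 (the two highest gate scores) ever run; experts 0 and 2 are skipped entirely for this input.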
Multimodal AI
AI systems that can process, understand, and generate multiple types of data — text, images, audio, video, and code — within a single model.
Pre-training
The initial training phase where a language model learns general language patterns from a massive text corpus, before being fine-tuned for specific tasks or behaviors.
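The core pre-training objective is next-token prediction: given the text so far, predict what comes next. A drastically simplified illustration is a bigram model that just counts which word follows which in the corpus (a neural LM learns far richer patterns, but the objective is the same shape):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, the words observed to follow it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Greedy next-token prediction: the most frequent follower in training."""
    return counts[word].most_common(1)[0][0]

# A toy "massive corpus"
corpus = "the cat sat on the mat the cat ran to the mat the cat"
model = train_bigram(corpus)
```

In this corpus "the" is followed by "cat" three times and "mat" twice, so the model predicts "cat" after "the". Scaling this idea from counted word pairs to a neural network over trillions of tokens is, loosely, what pre-training does.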
Predictive AI
AI systems that forecast outcomes based on historical data patterns, used for tasks like demand forecasting, risk assessment, and recommendation engines.
RLHF (Reinforcement Learning from Human Feedback)
A training technique where human preferences are used to fine-tune a language model through reinforcement learning, teaching it to produce responses that humans judge as helpful, accurate, and safe.
Reinforcement Learning
A machine learning approach where an agent learns by taking actions in an environment and receiving rewards or penalties, gradually discovering which strategies produce the best outcomes.
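The trial-and-reward loop can be shown with the simplest possible case: a one-state "bandit" with two actions, where the agent has no idea which action pays better and must find out by trying. (Full reinforcement learning adds multiple states and discounted future rewards; this toy keeps only the core loop.)

```python
import random

def train_bandit(steps=2000, epsilon=0.1, lr=0.1):
    """Action 0 pays off 20% of the time, action 1 pays off 80% of the time,
    but the agent doesn't know that. It acts, observes rewards, and nudges
    its value estimates toward what it sees."""
    random.seed(0)  # fixed seed so the toy run is reproducible
    q = [0.0, 0.0]          # estimated value of each action
    payout = [0.2, 0.8]     # hidden true payoff probabilities
    for _ in range(steps):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = q.index(max(q))
        reward = 1.0 if random.random() < payout[a] else 0.0
        q[a] += lr * (reward - q[a])  # move estimate toward observed reward
    return q

q = train_bandit()
```

After training, the agent's estimate for action 1 sits near its true 0.8 payoff and ranks clearly above action 0 — the "best strategy" was discovered purely from rewards, never from labels.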
Small Language Model (SLM)
A compact AI language model — typically under 10 billion parameters — designed to run efficiently on edge devices and single GPUs while delivering strong task-specific performance.
Supervised Learning
A machine learning approach where the model learns from labeled examples — input-output pairs where the correct answer is provided during training.
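A minimal concrete instance: fitting a straight line to labeled (input, output) pairs by least squares. The training data below is generated from a rule the model never sees directly; it recovers the rule purely from the labeled examples.

```python
def fit_line(pairs):
    """Learn y = a*x + b from labeled examples via closed-form least squares."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    a = (sum((x - mx) * (y - my) for x, y in pairs)
         / sum((x - mx) ** 2 for x, _ in pairs))
    b = my - a * mx
    return a, b

# Labeled training examples produced by the hidden rule y = 2x + 1
data = [(x, 2 * x + 1) for x in range(10)]
a, b = fit_line(data)
```

The same pattern — labeled pairs in, a predictive function out — scales up to image classifiers and translation models; only the model family and the amount of data change.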
Token
The basic unit of text that a language model processes — typically a word, subword, or punctuation mark, roughly equivalent to 3/4 of an English word.
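A toy illustration of why one word can be more than one token. Real tokenizers learn their subword vocabulary from data (e.g. via byte-pair encoding); this sketch fakes that with a tiny hand-picked vocabulary and greedy longest-match splitting:

```python
def tokenize(text, vocab):
    """Greedy longest-match subword tokenization over a fixed vocabulary.
    Real tokenizers learn the vocabulary; here it is hand-picked."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: 1-char fallback token
            i += 1
    return tokens

vocab = {"token", "iz", "ation", " ", "un", "believ", "able"}
```

With this vocabulary, "tokenization" splits into three tokens ("token", "iz", "ation") — which is why a model's context window, measured in tokens, holds fewer English words than its raw size suggests.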
Training Data
The dataset used to teach a machine learning model, containing the examples and patterns the model learns to recognize and reproduce.
Transformer
A neural network architecture that powers modern AI by processing entire input sequences simultaneously through an attention mechanism, rather than reading them word by word.
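The attention mechanism at the heart of this architecture can be sketched in pure Python. This is scaled dot-product attention on tiny made-up vectors — a real transformer adds learned projections, multiple heads, and stacked layers, but the "every position looks at every other position at once" step is this:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: for each query position, score every
    key, turn the scores into weights, and return a weighted average of
    the values. All positions are processed in one pass, not word by word."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # how much this position attends to each
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three token positions as 2-dimensional toy vectors
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = attention(x, x, x)  # self-attention: the sequence attends to itself
```

Each output row is a context-aware blend of the whole sequence, which is what lets a transformer relate a word to any other word regardless of distance.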