AI Dictionary
Beginner · ~2 min read · #ai #fundamentals #beginner

Artificial Intelligence

AI

The broad field of building machines that mimic human-like learning, reasoning, and decision-making.

[Diagram: Artificial Intelligence ⊃ Machine Learning ⊃ Deep Learning; each contains the next]
Definition

Artificial Intelligence is the umbrella term for any technique that lets machines exhibit human-like intelligent behavior — learning, reasoning, problem-solving, language understanding, vision.

Inside AI sits Machine Learning (ML); inside ML sits Deep Learning. LLMs, vision models, robotic control — all live inside this hierarchy. So every LLM is AI, but not every AI is an LLM.
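The nesting above can be sketched as a toy class hierarchy. This is illustrative only: the class names are pedagogical placeholders, not real library types.

```python
class AI:                             # the umbrella field
    pass

class MachineLearning(AI):            # ML is a subfield of AI
    pass

class DeepLearning(MachineLearning):  # DL is a subfield of ML
    pass

class LLM(DeepLearning):              # LLMs are deep-learning models
    pass

# Subset relation in both directions:
print(isinstance(LLM(), AI))   # True: every LLM is AI
print(isinstance(AI(), LLM))   # False: not every AI is an LLM
```

The asymmetry of `isinstance` here is exactly the "every LLM is AI, but not every AI is an LLM" point.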

Historically there are two schools: symbolic AI (rule-based, dominant in the 1960s-90s) and statistical/learned AI (data-driven, dominant today). The modern AI boom is the fruit of the second — and especially of deep learning.

Analogy

Saying "AI" is like saying "transportation." Bikes, cars, planes, rockets — all transportation, but very different things. AI covers everything from a hand-coded spam filter rule, to ChatGPT, to self-driving car vision. "We use AI" by itself tells you nothing.

Real-world example

A bank's fraud-detection system:
  • Rule layer (classic AI): "single transaction > $10K + abroad + 3 AM" → flag.
  • ML layer: a model trained on 10M past transactions spots subtle patterns no rule could express.
  • LLM layer: analyzes the conversation when the customer calls in.

All in the same system. All under the "AI" umbrella. Technically very different things.

When to use
  • Finding patterns in data (ML)
  • Decisions at a scale humans can't match (automated recommendations, pricing)
  • Building natural-language or visual interfaces (assistants, OCR, translation)
  • Automating repetitive cognitive tasks (summarization, classification)
When not to use
  • Tasks a simple if-else solves — overengineering
  • Decisions that must be 100% explainable (regulation, law)
  • When you have no data — saying "we'll AI it" doesn't create data
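The first "when not to use" point deserves a concrete picture. A hypothetical shipping-fee policy where a plain conditional is the whole "model"; reaching for ML here would be the over-engineering the list warns about:

```python
def shipping_fee(order_total: float) -> float:
    # Three fixed tiers: no data, no training, fully explainable.
    if order_total >= 50:
        return 0.0    # free shipping over $50
    if order_total >= 20:
        return 3.99
    return 6.99

print(shipping_fee(75))  # 0.0
print(shipping_fee(10))  # 6.99
```

If the business logic fits in a handful of if-else branches, the if-else is also the 100%-explainable option the second bullet asks for.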
Common pitfalls

Treating AI = LLM

AI is huge. Throwing an LLM at every problem is expensive and often wrong. XGBoost for classification, CNN for vision, ARIMA for time series — still the right call in many cases.

No data, no model

Good AI = good data + suitable model. Skipping the data question and jumping to model selection is the classic mistake. Data quality beats model complexity, every time.

Falling for the hype

Every company says "we do AI." Whether the product actually uses AI, or is just an if-else under the hood, is always worth asking.