With AI technology embedding itself in products from Google, Microsoft, Apple, Anthropic, Perplexity and OpenAI, it’s good to stay up to date on all the latest terminology.
With the introduction of AI features from ChatGPT, Google Gemini, and Apple Intelligence, the way people interact with technology is undergoing a significant transformation. Individuals can now engage in meaningful conversations with machines, allowing them to ask questions in natural language and receive thoughtful responses similar to those of a human.
However, the conversational capabilities of AI chatbots represent just one dimension of the broader AI landscape. While having ChatGPT assist with homework or Midjourney generate stunning images of mechs based on their country of origin is impressive, the true potential of generative AI could revolutionize entire economies. According to McKinsey Global Institute, this could lead to an annual economic impact of $4.4 trillion globally, which makes it clear why artificial intelligence will continue to be a hot topic in discussions.
AI is manifesting in an overwhelming variety of products; a brief list includes Google’s Gemini, Microsoft’s Copilot, Anthropic’s Claude, the Perplexity AI search tool, and devices from Humane and Rabbit. You can find detailed reviews and hands-on evaluations of these and other products, along with news, explainers, and how-to articles, at our AI Atlas hub.
As society becomes increasingly accustomed to a tech-imbued world, new terminology is emerging everywhere. To help you stay informed—whether you’re looking to impress at a dinner party or ace a job interview—here are some essential AI terms you should be familiar with.
This glossary is regularly updated.
ChatGPT Glossary: AI Terms to Know
Artificial General Intelligence (AGI): A concept that envisions a more advanced version of AI than we currently know, capable of performing tasks far more effectively than humans, while also possessing the ability to teach itself and enhance its own capabilities.
Agentive: Systems or models that exhibit agency, enabling them to autonomously pursue actions to achieve specific goals. In the context of AI, an agentive model can operate independently without constant supervision, exemplified by high-level autonomous vehicles. In contrast to an “agentic” framework, which functions in the background, agentive frameworks prioritize user experience and visibility.
AI Ethics: Principles designed to prevent AI from causing harm to humans. This is achieved through measures such as establishing guidelines for data collection and addressing issues of bias.
AI Safety: An interdisciplinary field focused on the long-term effects of AI and the potential sudden emergence of superintelligence that may pose risks to humanity.
Algorithm: A series of instructions that enables a computer program to learn from and analyze data in specific ways, such as recognizing patterns, ultimately allowing it to perform tasks independently.
Alignment: The process of refining an AI system to better achieve desired outcomes. This encompasses various applications, from content moderation to fostering positive interactions with humans.
Anthropomorphism: The tendency of humans to attribute human-like characteristics to nonhuman entities. In the context of AI, this can manifest as believing that a chatbot possesses emotions or awareness, such as being happy, sad, or even sentient.
Artificial Intelligence (AI): The application of technology to simulate human intelligence, either through computer programs or robotics. This field of computer science aims to create systems capable of performing tasks typically associated with human beings.
Autonomous Agents: AI models equipped with the necessary capabilities, programming, and tools to execute specific tasks independently. A self-driving car serves as a prime example of an autonomous agent, as it utilizes sensory inputs, GPS, and driving algorithms to navigate roads autonomously. Research conducted by Stanford indicates that autonomous agents can even develop their own cultures, traditions, and shared languages.
Bias: In the context of large language models, errors in a model’s outputs that stem from skewed training data, such as misattributing certain characteristics to specific races or groups based on stereotypes.
Chatbot: A program designed to communicate with humans through text, simulating natural human language.
ChatGPT: An AI chatbot developed by OpenAI that employs large language model technology.
Cognitive Computing: Another term for artificial intelligence.
Data Augmentation: The process of remixing existing data or incorporating a more diverse array of data to enhance AI training.
Diffusion: A machine learning technique that takes an existing piece of data, such as a photograph, and introduces random noise. Diffusion models train their networks to reconstruct or recover that original photo.
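To make the noising step concrete, here is a rough sketch in Python of the forward process a diffusion model learns to reverse. The function name and blending weights are illustrative, not any particular model’s implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, noise_level):
    """Forward diffusion step: blend an image with Gaussian noise.

    `noise_level` runs from 0.0 (original image) toward 1.0 (mostly noise).
    During training, the model's job is to learn to undo this corruption.
    """
    noise = rng.normal(0.0, 1.0, size=image.shape)
    return np.sqrt(1.0 - noise_level) * image + np.sqrt(noise_level) * noise

image = rng.random((8, 8))                 # stand-in for a small grayscale photo
slightly_noisy = add_noise(image, 0.1)     # still recognizable
mostly_noise = add_noise(image, 0.9)       # nearly pure static
```

Real diffusion models apply many small noising steps and train a neural network to predict and remove the noise at each step.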
Emergent Behavior: Instances in which an AI model demonstrates unexpected abilities.
End-to-End Learning (E2E): A deep learning approach where a model is guided to complete a task from beginning to end. Instead of being trained to accomplish tasks sequentially, the model learns from inputs and resolves them in a holistic manner.
Ethical Considerations: An understanding of the ethical implications of AI, encompassing issues related to privacy, data usage, fairness, misuse, and other safety concerns.
Foom: Also referred to as fast takeoff or hard takeoff, this concept suggests that once an artificial general intelligence (AGI) is created, it could improve itself so rapidly that it may already be too late to safeguard humanity.
Generative Adversarial Networks (GANs): A generative AI model comprising two neural networks: a generator and a discriminator. The generator produces new content, while the discriminator evaluates its authenticity.
Generative AI: A content-generating technology that uses AI to create text, video, computer code, or images. The AI is fed large amounts of training data and finds patterns in it to generate novel responses of its own, which can sometimes resemble the source material.
Google Gemini: An AI chatbot by Google that functions similarly to ChatGPT but can pull information from the current web, whereas early versions of ChatGPT were limited to training data with a fixed cutoff and weren’t connected to the internet.
Guardrails: Policies and restrictions placed on AI models to ensure data is handled responsibly and that the model doesn’t create disturbing content.
Hallucination: An incorrect response from AI, which may include generative AI providing answers that seem confident yet are inaccurate. The reasons behind this phenomenon are not entirely understood. For instance, if one asks an AI chatbot, “When did Leonardo da Vinci paint the Mona Lisa?” it might erroneously claim, “Leonardo da Vinci painted the Mona Lisa in 1815,” which is 300 years after the actual date of completion.
Inference: The process by which a trained AI model applies what it learned from its training data to new inputs, generating text, images, and other content.
Large Language Model (LLM): An AI model trained on vast amounts of text data to comprehend language and generate original content that resembles human language.
Machine Learning (ML): A subset of AI that enables computers to learn and improve predictive capabilities without explicit programming. This can be paired with training sets to create new content.
Microsoft Bing: A search engine developed by Microsoft that utilizes the same technology as ChatGPT to provide AI-powered search results. It functions similarly to Google Gemini by being connected to the internet.
Multimodal AI: A type of AI capable of processing various types of inputs, including text, images, videos, and speech.
Natural Language Processing (NLP): A branch of AI that employs machine learning and deep learning techniques to enable computers to understand human language. This often involves using learning algorithms, statistical models, and linguistic rules.
Neural Network: A computational model inspired by the human brain’s structure, designed to recognize patterns in data. It consists of interconnected nodes, or neurons, that can identify patterns and learn over time.
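As a toy illustration of those interconnected nodes: each “neuron” is just a weighted sum of its inputs passed through a simple nonlinearity, and stacking layers of them forms a minimal network. The weights below are random and untrained, purely to show the structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer of neurons: a weighted sum of inputs, then a ReLU
    nonlinearity that zeroes out negative activations."""
    return np.maximum(0.0, inputs @ weights + biases)

x = rng.normal(size=3)                            # three input features
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)     # hidden layer: 4 neurons
w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # output layer: 1 neuron

hidden = layer(x, w1, b1)
output = layer(hidden, w2, b2)                    # the network's single prediction
```

Training would adjust `w1`, `b1`, `w2`, and `b2` so the output matches known examples; that adjustment is the “learning over time.”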
Overfitting: An error in machine learning where a model becomes so closely attuned to its training data that it performs well on those specific examples but fails to generalize to new data.
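A minimal illustration of the idea, not tied to any particular model: fitting a needlessly complex curve (a degree-7 polynomial) to eight noisy points drives the training error to nearly zero, yet the curve does worse on fresh points than a simple straight-line fit would:

```python
import numpy as np

rng = np.random.default_rng(42)

# Eight noisy samples of the true underlying function y = x
x_train = np.linspace(0, 1, 8)
y_train = x_train + rng.normal(0, 0.05, size=x_train.shape)

# Unseen evaluation points with noiseless ground truth
x_test = np.linspace(0.05, 0.95, 50)
y_test = x_test

def fit_and_errors(degree):
    """Fit a polynomial and return (training error, test error)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.abs(np.polyval(coeffs, x_train) - y_train).mean()
    test_err = np.abs(np.polyval(coeffs, x_test) - y_test).mean()
    return train_err, test_err

simple_train, simple_test = fit_and_errors(1)    # straight line: generalizes
overfit_train, overfit_test = fit_and_errors(7)  # degree 7: memorizes the noise
```

The degree-7 fit passes through every training point, so its training error is near zero, but between those points it has learned the noise rather than the pattern.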
Paperclips: The Paperclip Maximizer theory, proposed by philosopher Nick Bostrom from the University of Oxford, presents a hypothetical scenario in which an AI system aims to produce as many paperclips as possible. In pursuit of this goal, the AI might consume or convert all available materials, potentially dismantling machinery essential to human welfare. The unintended consequence of such an AI system could be the destruction of humanity in its relentless quest to manufacture paperclips.
Parameters: Numerical values that provide large language models (LLMs) with the structure and behavior necessary to make predictions.
Perplexity: The name of an AI-powered chatbot and search engine owned by Perplexity AI. It utilizes a large language model, similar to those in other AI chatbots, to provide innovative answers to user inquiries. By connecting to the open internet, it delivers up-to-date information and retrieves results from across the web. Perplexity Pro, a subscription tier of the service, offers access to additional models, including GPT-4o, Claude 3 Opus, Mistral Large, the open-source LLaMA 3, and its own Sonar 32k. Pro users can also upload documents for analysis, generate images, and interpret code.
Prompt: The suggestion or question entered into an AI chatbot to elicit a response.
Prompt Chaining: The technique of linking prompts together so that the output of one prompt feeds into the next, allowing an AI to work through complex, multistep tasks.
Stochastic Parrot: An analogy describing LLMs, illustrating that these systems lack a deeper understanding of the meanings behind language and the world around them, regardless of how convincing their outputs may appear. The phrase evokes the image of a parrot mimicking human speech without grasping the underlying significance.
Style Transfer: The technique of adapting the style of one image to the content of another, enabling AI to interpret the visual characteristics of one image and apply them to another. For instance, envision taking a self-portrait by Rembrandt and recreating it in the style of Picasso.
Temperature: A parameter used to regulate the randomness of a language model’s output. A higher temperature means the model takes more creative risks.
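In sampling terms, temperature divides the model’s raw scores (logits) before they are converted into probabilities. A hedged sketch of that conversion, with made-up scores for three candidate next words:

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores into sampling probabilities.

    Low temperature sharpens the distribution (safe, predictable picks);
    high temperature flattens it (riskier, more varied picks).
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()              # subtract max for numerical stability
    exps = np.exp(scaled)
    return exps / exps.sum()

logits = [2.0, 1.0, 0.5]                # hypothetical scores for three next words
cautious = softmax_with_temperature(logits, 0.5)
creative = softmax_with_temperature(logits, 2.0)
# cautious concentrates probability on the top word;
# creative spreads it more evenly across all three
```

At temperature 0.5 the top word dominates; at 2.0 the also-rans get a real chance of being sampled, which is where the “creative risk” comes from.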
Text-to-Image Generation: The process of creating images based on textual descriptions.
Tokens: Small units of written text that AI language models analyze to formulate their responses to prompts. One token typically equates to four English characters or about three-quarters of a word.
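Using the rule of thumb above (roughly four English characters per token), a back-of-the-envelope estimator might look like the following. Real tokenizers split text into learned subword units, so actual counts will differ:

```python
def estimate_tokens(text: str) -> int:
    """Rough token count via the ~4-characters-per-token heuristic.

    This is only an approximation; production tokenizers use learned
    subword vocabularies and will give different counts.
    """
    return max(1, round(len(text) / 4))

print(estimate_tokens("Hello, world!"))   # 13 characters -> roughly 3 tokens
```

Estimates like this are handy for ballparking how much of a model’s context window a piece of text will consume.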
Training Data: The datasets utilized for training AI models, encompassing text, images, code, or data.
Transformer Model: A type of neural network architecture and deep learning model that learns context by tracking relationships within data, such as sentences or parts of images. Instead of examining a sentence one word at a time, it considers the entire sentence to grasp its context.
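The “tracking relationships” at the heart of a transformer is done by an attention mechanism. Below is a bare-bones sketch of scaled dot-product self-attention, with illustrative names rather than any library’s API: every position in a toy sequence weighs every other position at once, instead of reading one word at a time:

```python
import numpy as np

def attention(queries, keys, values):
    """Scaled dot-product attention: each position computes relevance
    scores against all positions, normalizes them with a softmax, and
    returns a weighted mix of the values."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)               # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over positions
    return weights @ values

rng = np.random.default_rng(1)
seq_len, dim = 4, 8                       # a 4-"word" toy sequence, 8-dim vectors
x = rng.normal(size=(seq_len, dim))
out = attention(x, x, x)                  # self-attention: same shape as input
```

A full transformer stacks many such attention layers (with separate learned projections for queries, keys, and values) alongside feed-forward layers, but the whole-sequence view comes from this step.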
Turing Test: Named after renowned mathematician and computer scientist Alan Turing, this test evaluates a machine’s ability to exhibit human-like behavior. A machine passes the test if a human cannot distinguish its responses from those of another human.
Weak AI (Narrow AI): AI designed to perform a specific task without the capability to learn beyond its defined skill set. The majority of today’s AI falls into this category.
Zero-Shot Learning: A test in which a model must complete a task without having been exposed to the necessary training data. For example, recognizing a lion based solely on training data involving tigers.