Glossary

Key terms and concepts used throughout AhaSignals research. Each term is defined concisely and organized by topic cluster for easy reference.

AI + Psychology

Cognitive Offloading
The process of using external systems or tools to reduce internal cognitive demands, freeing mental resources for other tasks. This occurs when individuals delegate memory, decision-making, or processing tasks to external aids.
Binary Closure Signal
A feedback mechanism that provides only two possible states (e.g., done/not-done, yes/no), creating unambiguous decision outcomes. These signals eliminate intermediate states and reduce cognitive load.
Decision Entropy
The measure of uncertainty or ambiguity in a decision space. Higher entropy indicates more cognitive load required to resolve the decision. Binary systems minimize entropy by reducing options to two clear states.
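Under the standard information-theoretic reading of this definition (an assumption on our part; the entry does not name a formula), decision entropy can be computed as Shannon entropy over the probabilities of the available options, which makes the binary-vs-multi-option comparison concrete:

```python
import math

def decision_entropy(probabilities):
    """Shannon entropy (in bits) of a decision space, given the
    probability of each option being chosen. Zero-probability
    options contribute nothing."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Two equally likely states carry 1 bit of entropy; four equally
# likely options carry 2 bits, so the binary case leaves less
# uncertainty to resolve.
print(decision_entropy([0.5, 0.5]))                # 1.0
print(decision_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```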
Closure Signal
A psychological cue that indicates task completion or decision finality, providing mental relief and allowing cognitive resources to be reallocated. Effective closure signals are immediate, unambiguous, and definitive.

AI + Finance

Aha Alpha
Excess returns generated by identifying and acting on patterns that trigger sudden insights or realizations in market participants. These signals emerge at the intersection of pattern recognition and behavioral insight.
Cognitive Signal
Observable behavioral patterns in market data that indicate collective psychological states or decision-making processes. These signals can precede price movements and represent market participant cognition.
Pattern Recognition
The automated identification of regularities, correlations, or structures in data using AI algorithms. In financial contexts, this involves detecting non-obvious relationships that may indicate trading opportunities.
Behavioral Cascade
The phenomenon where insight moments or decisions spread through market participants via social learning and information diffusion, creating predictable patterns in collective behavior and price movements.

General Methodology

A3P-L (AI-Augmented Academic Production - Lean)
A six-stage research methodology that uses AI to generate competing hypotheses while maintaining human oversight and transparency. The six stages are question framing, parallel hypothesis generation, disagreement extraction, confidence tagging, editorial review, and public disclosure.
C-SNR (Cognitive Signal-to-Noise Ratio)
A quantitative metric (0-1) measuring claim reliability based on external evidence, model consistency, and logic coherence. Higher C-SNR indicates stronger support for a claim. Used to tag confidence levels in research.
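One way the three inputs could combine into a single 0-1 score is a weighted mean; the equal weights below are an illustrative assumption, since the definition does not state how the components are aggregated:

```python
def c_snr(evidence, consistency, coherence, weights=(1/3, 1/3, 1/3)):
    """Illustrative C-SNR: a weighted mean of three component scores
    (external evidence, model consistency, logic coherence), each
    already normalized to [0, 1]. The equal default weights are an
    assumption, not part of the published definition."""
    components = (evidence, consistency, coherence)
    if any(not 0.0 <= c <= 1.0 for c in components):
        raise ValueError("component scores must lie in [0, 1]")
    return sum(w * c for w, c in zip(weights, components))

print(round(c_snr(0.9, 0.8, 0.7), 3))  # 0.8
```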
Structured Disagreement
A systematic mapping of where competing hypotheses align, conflict, or diverge. This approach makes uncertainty explicit and prevents single-model bias by documenting areas of theoretical disagreement.
Confidence Level
A categorical assessment of claim reliability: "Well-supported" (C-SNR ≥ 0.75), "Conceptually plausible" (C-SNR ≥ 0.50), or "Speculative" (C-SNR < 0.50). Each level indicates the strength of evidence and model agreement.
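The thresholds above amount to a simple lookup, sketched here (the function name is ours; the labels and cutoffs come from the definition):

```python
def confidence_level(c_snr):
    """Map a C-SNR score (0-1) to its categorical confidence label,
    using the thresholds from the Confidence Level definition."""
    if not 0.0 <= c_snr <= 1.0:
        raise ValueError("C-SNR must lie in [0, 1]")
    if c_snr >= 0.75:
        return "Well-supported"
    if c_snr >= 0.50:
        return "Conceptually plausible"
    return "Speculative"

print(confidence_level(0.82))  # Well-supported
print(confidence_level(0.75))  # Well-supported (boundary is inclusive)
print(confidence_level(0.41))  # Speculative
```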
Verifiable Claim
A research assertion that can be tested against external evidence, empirical data, or established theory. Distinguished from inferential claims, which extend beyond direct verification.
Inferential Claim
A research assertion that extends beyond direct verification, involving logical inference, theoretical extrapolation, or predictive reasoning. These claims typically have lower confidence levels than verifiable claims.
Competing Models
Multiple explanatory frameworks generated from different perspectives (mechanism, behavior, system) that offer incompatible explanations for the same phenomenon. Used in A3P-L to avoid single-model bias.
Noise Model
An explicit documentation of uncertainty sources in research, including algorithmic bias, theoretical assumptions, evidence weaknesses, and logic gaps. Makes limitations transparent rather than hidden.
Research Integrity Block
A standardized disclosure section in research articles stating that multiple models were evaluated, disagreements are documented, claims are confidence-tagged, and no single model is treated as authoritative.