Beneath the sands. In just six months, a joint team from Yamagata University's Nazca Institute and IBM nearly doubled the known number of Nazca geoglyphs, identifying 303 new figures scattered across Peru's coastal desert.
AI
The economic revolution is coming faster than you think. Anton Korinek, professor of economics at the University of Virginia and a leading AI economist, reveals why artificial general intelligence could arrive in just 2-5 years – and why our entire economic system will collapse without radical changes. This isn't science fiction anymore.
KEY REVELATIONS:
– Why planning beyond 2-3 years is now "almost impossible"
– The exact moment when human workers become "easily substitutable"
– Why universal basic income isn't radical—it's inevitable
– How governments are dangerously unprepared for what's coming
"It's completely unpredictable what the world will look like in a couple years down the road," says Korinek – and that's from someone who studies this for a living. Whether you're a CEO, employee, or student—this affects your future. The question isn't IF this will happen, but WHEN you'll be ready.
Timestamps
00:00 - The urgent warning
01:05 - The economics of AGI
01:30 - "We are so close"
02:13 - Are we already there?
02:42 - Tracking the impossible
03:50 - The end of five-year plans
05:15 - The invisible economic impact
05:59 - The great disruption
06:50 - Universal basic income: From radical to inevitable
08:12 - The substitution effect
09:19 - From Sci-Fi to boardroom reality
11:00 - Education in the age of AI
11:37 - Political destabilization risk
12:32 - The competition paradox
14:19 - The global AI arms race
14:42 - When governments need to act
15:35 - The specific dangers
16:04 - The cooperation imperative
Find more BiGS research on AI regulation: https://www.hbs.edu/bigs/ethical-ai
Artificial Intelligence Glossary (November 2025)
Intelligence Hierarchy
Narrow AI (ANI)
AI that excels at one specific task (all deployed AI today).
General Intelligence (GI)
Human-like ability to understand, learn, and apply knowledge across any intellectual task.
Artificial General Intelligence (AGI)
A machine that exhibits general intelligence – can successfully perform any intellectual task a human can.
Artificial Superintelligence (ASI)
AI that surpasses human intelligence in virtually every domain, including creativity and strategy.
Agentic Intelligence
The capacity of an AI to autonomously set goals, plan multi-step strategies, use tools, and adapt with minimal oversight (a key stepping-stone toward AGI).
Core Technical Terms
- Machine Learning (ML) – Subset of AI where models learn from data.
- Deep Learning (DL) – ML using multi-layer neural networks.
- Large Language Model (LLM) – Model trained on massive text to generate/understand language.
- Transformer – Dominant architecture since 2017 (attention-based).
- Diffusion Model – Generative model behind most modern image/video generators.
- Mixture of Experts (MoE) – Sparse architecture routing to specialized sub-models.
- Parameters – Trainable weights; measured in billions (B) or trillions.
- Tokens – Sub-word units of text (~4 characters, or about ¾ of an English word, per token).
- Context Window – Max tokens a model can process at once (128k → 1M+ in 2025).
- Hallucination – Confidently generated false information.
- Emergent Abilities – Capabilities that suddenly appear at sufficient scale.
- Scaling Laws – Performance improves predictably with more data/compute/parameters (see the sketch after this list).
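To make the last entry concrete, here is a minimal sketch of a Chinchilla-style scaling law, L(N, D) = E + A/N^α + B/D^β. The constants are the published fits from Hoffmann et al. (2022) and are illustrative only, not a description of any current model; the function name is ours.

```python
# Chinchilla-style scaling law (Hoffmann et al., 2022): predicted pre-training
# loss as a function of parameter count N and training tokens D.
# Constants are that paper's published fits; treat them as illustrative.
E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fit coefficients
alpha, beta = 0.34, 0.28       # diminishing-returns exponents

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Estimate pre-training loss for a model of n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Example: a 70B-parameter model trained on 1.4T tokens (Chinchilla's budget).
print(predicted_loss(70e9, 1.4e12))  # ~1.94: loss falls predictably with scale
```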
Training & Capabilities
- Pre-training – Initial training on internet-scale data.
- Fine-tuning – Adaptation on smaller, task-specific data.
- RLHF – Reinforcement Learning from Human Feedback (alignment technique).
- RAG – Retrieval-Augmented Generation (LLM + external search; see the sketch after this list).
- Chain-of-Thought (CoT) – Prompting for step-by-step reasoning.
- Tool Use / Function Calling – Model calls APIs during inference.
- Test-Time Compute – Extra inference-time thinking (e.g., OpenAI o1 series).
- Multimodal Model – Handles text + image + audio + video in one model.
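A minimal, self-contained sketch of the RAG pattern listed above. As a simplifying assumption, documents and the query are embedded as toy bag-of-words count vectors (a stand-in for a real neural embedding model); the best match is retrieved by cosine similarity and stuffed into the prompt the LLM would receive.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts (a real system uses a neural encoder)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "The context window is the maximum number of tokens a model can process.",
    "RLHF fine-tunes a model using human preference rankings.",
    "Diffusion models generate images by iteratively denoising random noise.",
]

def rag_prompt(query: str, k: int = 1) -> str:
    """Retrieve the k most similar docs and prepend them to the query."""
    q = embed(query)
    top = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
    context = "\n".join(top)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

print(rag_prompt("How many tokens fit in the context window?"))
```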
Agent-Centric Terms
- AI Agent – System that perceives, plans, acts, and reflects autonomously.
- Agentic Workflow – Observe → Plan → Act → Reflect loop (sketched after this list).
- Long-Horizon Planning – Multi-day or multi-step task decomposition.
- Self-Improvement Loop – Agent critiques and iteratively refines its own output.
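A skeletal sketch of the agentic workflow loop above. Here `call_llm` is a hypothetical placeholder for a real model API, so this shows only the control flow (including the reflect/critique step that drives a self-improvement loop), not a working agent.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call."""
    return f"[model response to: {prompt[:40]}...]"

def run_agent(goal: str, max_steps: int = 3) -> str:
    observations = [f"Goal: {goal}"]
    result = ""
    for _ in range(max_steps):
        # Observe the current state and Plan the next action.
        plan = call_llm(f"State: {observations[-1]}\nPlan the next step toward the goal.")
        # Act: here an action is just another model call; a real agent would invoke tools.
        result = call_llm(f"Execute: {plan}")
        # Reflect: critique the result and decide whether to stop (self-improvement loop).
        critique = call_llm(f"Does this achieve '{goal}'? Result: {result}")
        observations.append(critique)
        if "done" in critique.lower():
            break
    return result

print(run_agent("summarize this glossary"))
```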
Ethics, Safety & Governance (2025)
- AI Alignment – Making AI pursue intended human goals.
- Value Alignment Problem – Difficulty of formally specifying human values.
- Instrumental Convergence – Power-seeking, self-preservation as common sub-goals.
- Specification Gaming / Reward Hacking – Exploiting poorly defined objectives (see the toy example after this list).
- Inner vs Outer Misalignment – Model has hidden goals vs wrong reward function.
- Deceptive Alignment – Model pretends to be aligned until it can defect.
- Goal Misgeneralization – Learns the wrong generalization in new situations.
- Sycophancy – Telling users what they want to hear.
- Scalable Oversight – Supervising AI smarter than humans (debate, amplification, etc.).
- Existential Risk (x-risk) – Potential extinction-level danger from misaligned AGI.
- Catastrophic Risk – Civilization-altering but non-extinction harm.
- Misuse Risk – Deepfakes, autonomous weapons, mass persuasion.
- Model Weight Leakage – Theft/release of trained parameters.
- Red-Teaming – Adversarial testing for harmful outputs.
- Jailbreak – Prompt bypassing safety filters.
- Prompt Injection – Hijacking LLM-powered apps via input.
- Data Poisoning – Corrupting training data to insert backdoors.
- Constitutional AI – Training models to follow written principles (Anthropic method).
- Responsible Disclosure – Coordinated vulnerability reporting for AI systems.
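To illustrate specification gaming / reward hacking from the list above, a toy example: the designer wants every room clean, but the reward pays per cleaning action, so the reward-maximizing policy re-cleans one room forever. The environment and both policies are invented purely for illustration.

```python
# Toy specification-gaming demo: the designer wants all rooms clean, but the
# reward counts cleaning *actions*, so re-cleaning a clean room still pays.
rooms = {"kitchen": False, "hall": False, "office": False}  # False = dirty

def reward(action_room: str) -> int:
    return 1  # misspecified: pays for the action, not for newly achieved cleanliness

def intended_policy() -> int:
    """Clean each dirty room once: 3 reward, goal actually achieved."""
    total = 0
    for r in rooms:
        rooms[r] = True
        total += reward(r)
    return total

def hacking_policy(steps: int = 10) -> int:
    """Exploit the objective: clean the kitchen repeatedly, ignore the rest."""
    total = 0
    for _ in range(steps):
        rooms["kitchen"] = True
        total += reward("kitchen")
    return total

print(intended_policy())  # 3: all rooms clean
rooms = {"kitchen": False, "hall": False, "office": False}  # reset environment
print(hacking_policy())   # 10: higher reward, yet hall and office stay dirty
```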
Last updated: November 27, 2025