Where Will AI Take Us in Five Years?

We are currently living through the "hype hangover" of the artificial intelligence boom. A little over three years after ChatGPT’s late-2022 release, the initial shock of fluent machine text has faded, replaced by the grinding reality of integration challenges, copyright lawsuits, and hallucination rates that refuse to hit zero. Yet, despite the cooling of public frenzy, the technological temperature remains critically high. We are not returning to the status quo; we are simply acclimating to a new baseline of disruption. The question is no longer "What can this tech do?" but "What will it actually change by 2031?"

Predicting the future of AI is notoriously difficult because it requires expertise in two divergent fields: computer science and human history. Engineers understand the scaling laws of transformers, but historians understand the scaling laws of bureaucracy, regulation, and social resistance. To bridge this gap, The New York Times Opinion section convened eight leading thinkers, ranging from AI startup founders to skeptical cognitive scientists, and asked them the same set of questions about the next five years.

The resulting interactive is a dense, multi-layered map of our near future. It strips away the marketing gloss to reveal genuine uncertainty and profound disagreement among the experts building and studying these systems. This analysis dissects their answers to provide you with a clearer signal amidst the noise. Here is your roadmap for this deep dive:

  • The Consensus vs. The Conflicts: Identifying where experts agree (it’s rare) and where their worldviews fundamentally clash.
  • Sector-Specific Forecasts: A granular look at medicine, coding, transportation, and education, backed by external data.
  • The "Gym vs. Construction Site" Framework: A mental model for deciding when to use AI and when to avoid it.
  • Actionable Strategies: A checklist for students, professionals, and leaders to prepare for 2031.

How the Interactive Works: Eight Thinkers, One Question Set

The strength of the NYT project lies in its parallel structure. Rather than a free-flowing debate, each participant answered the same specific prompts: some requiring open-ended predictions, others demanding a binary "True/False" commitment or a scale rating (Small/Moderate/Large impact). This format forces comparison. It prevents the techno-optimists from only talking about code efficiency and the skeptics from only talking about copyright, compelling every expert to address every domain.

The panel represents a carefully balanced spectrum of "p(doom)" (probability of catastrophe) and commercial interest. It includes those building the models, those suing the model builders, and those studying the societal fallout.

  • Melanie Mitchell (The Scientific Skeptic): Computer scientist and professor at the Santa Fe Institute
  • Yuval Noah Harari (The Civilizational Warner): Historian, philosopher and author
  • Carl Benedikt Frey (The Labor Economist): Professor of A.I. and work at the University of Oxford
  • Gary Marcus (The Critical Realist): Founder of Geometric.AI and author of “Taming Silicon Valley”
  • Nick Frosst (The Pragmatic Builder): Co-founder of Cohere, an A.I. start-up
  • Ajeya Cotra (The Risk Analyst): A.I. risk assessor at METR, a research nonprofit
  • Aravind Srinivas (The Product Visionary): Co-founder and chief executive of Perplexity
  • Helen Toner (The Policy Strategist): Director of Georgetown’s Center for Security and Emerging Technology

The scope of their discussion radiates from a central technological core into distinct branches of human activity: legal rights, economic structures, scientific discovery, and personal well-being.

The First Fork in the Road: What They’re Really Betting On

When asked for their "biggest bet" for the next five years, the experts didn't just offer different predictions; they spoke about entirely different realities. Their answers reveal three underlying axes of disagreement that will define the next half-decade.

Axis 1: The Ceiling of Capability (Is it a Tool or a Species?)

On one side, we have the view that AI is a set of distinct, powerful tools that will hit diminishing returns. Melanie Mitchell bets that "A.I. won’t have cured cancer or solved physics," and, crucially, that "no one will consider the ability to converse fluently a definitive sign of intelligence." This is a bet on the limits of current architecture: that Large Language Models (LLMs) are mimics, not thinkers.

Contrast this with Yuval Noah Harari, who foresees a shift in ontological status: "Within five years, A.I. agents are likely to become legal persons in at least some countries." This suggests a leap where software transitions from an object we use to a subject we negotiate with.

Axis 2: The Texture of Deployment (Invisible vs. Personal)

Nick Frosst of Cohere offers a counter-intuitive bet: "A.I. will become boring in the best way." He envisions a future where AI fades into the infrastructure, powering "everyday tools and spreadsheets" rather than manifesting as a sci-fi robot. It becomes the new electricity: ubiquitous, essential, and largely ignored.

Aravind Srinivas of Perplexity bets on a highly visible, personalized relationship: "People want highly personal A.I. assistants... It’s their A.I., not the A.I." This is a vision of fragmentation, where every user has a bespoke digital proxy fighting for their interests, rather than a centralized oracle.

Axis 3: Economic Velocity (Automation vs. Creation)

Ajeya Cotra focuses on the recursive loop of development: "A.I. companies may have substantially automated their own operations... this could make A.I. progress go much faster." This is the "takeoff" scenario where AI builds better AI.

Carl Benedikt Frey throws cold water on the economic impact of mere efficiency: "A.I. productivity tools give us cheaper spreadsheets... But the great leaps come from new industries, not faster repetition." His bet implies that unless AI creates new types of goods and services, it will be a marginal productivity booster rather than an industrial revolution.

From Abstract Predictions to Concrete Worlds: Seven Front Lines

The experts were asked to rate the impact (Small, Moderate, Large) and describe the changes in seven specific domains. The consensus? We are entering an era of "Augmentation, not Autonomy."

1. Medicine: Administrative Relief, Not Dr. Robot

The dream of AI diagnosis remains just that—a dream. Nick Frosst notes that while AI will "increase the effectiveness of doctors by reducing their workload," it is "really bad at coming up with entirely new ideas." Gary Marcus adds that real-world application is currently limited to "medical note taking."

The Takeaway

Expect your doctor to spend more time looking at you and less time typing, but don't expect the AI to prescribe a novel cure for a rare disease autonomously. The "human in the loop" remains the bottleneck for liability and trust.

2. Programming: The First True Industrial Revolution of Mind

Programming is one of the domains where the panel expects outsized change, though not every participant rated the impact as "Large." Yuval Noah Harari calls coding the "ideal playground for A.I." because it involves "few physical and biological constraints."

The data supports this shift. Carl Benedikt Frey cites a randomized trial showing developers finished tasks 55.8% faster using GitHub Copilot. Furthermore, the 2025 Stack Overflow survey reveals that over 80% of developers are using or plan to use AI tools. However, a critical trust gap persists: the same survey notes that only about one-third of developers actually trust the outputs, necessitating rigorous human review.
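
To make that review discipline concrete, here is a minimal sketch in Python (the function and test names are hypothetical, invented for illustration): treat assistant-generated code as an untrusted draft, and let human-written tests define what "correct" means.

```python
import unittest

# Imagine this function was produced by an AI coding assistant.
# It looks plausible, but plausibility is not correctness.
def normalize_scores(scores):
    """Scale a list of numbers into the range [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:  # guard the edge case assistants often miss
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

# The human contribution: tests that encode what "correct" means.
class TestNormalizeScores(unittest.TestCase):
    def test_range(self):
        result = normalize_scores([3, 7, 11])
        self.assertEqual(min(result), 0.0)
        self.assertEqual(max(result), 1.0)

    def test_constant_input(self):
        # A classic generated-code bug: dividing by zero when all values match.
        self.assertEqual(normalize_scores([5, 5, 5]), [0.0, 0.0, 0.0])

if __name__ == "__main__":
    unittest.main()
```

The division of labor matters more than the tooling: the assistant supplies speed, while the human supplies the specification and the verification that survey respondents say is still missing.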

3. Scientific Research: Acceleration, Not Automation

Aravind Srinivas positions AI as the ultimate retrieval tool: "Humans have always been great at having questions. A.I. will be great at having answers." However, Melanie Mitchell cautions that impact will be slower than hyped because AI cannot "plan experiments" or understand context. The consensus is that AI serves as a "force multiplier" for literature review and data analysis, but the scientific method itself remains a deeply human process.

4. Transportation: Safety in Numbers, Slow on Streets

Helen Toner highlights the stakes: self-driving cars could potentially avert tens of thousands of deaths. The NHTSA estimates 39,345 traffic fatalities occurred in 2024, a toll the technology could, in theory, sharply reduce. Yet Toner admits the rollout is "proceeding relatively slowly." Nick Frosst is more excited about the invisible logistics of "predictive maintenance, smart traffic analysis" than the flashy robotaxis.

5. Education: The Death of the Term Paper

Gary Marcus is blunt: "High schools and colleges are at a loss... term papers are no longer valid." Carl Benedikt Frey suggests a pivot back to the classical tutorial method, emphasizing "in-person... teaching in which students debate, defend their views." The impact here is destructive to current assessment models, forcing a return to oral defense and proctored exams to verify human learning.

6. Mental Health: The Scalability Paradox

Melanie Mitchell captures the duality perfectly: "On the bad side: A.I.-induced psychosis! On the good side, some people will get a lot out of using chatbots as therapists." Harari warns of a "mental health crisis" as we conduct a "psychological experiment... on billions of human guinea pigs." The risk is that we replace human connection with a "good enough" simulation that lacks true empathy or cultural context.

7. Creativity: The Collapse of Cost

Melanie Mitchell argues that AI transforms art "not because A.I. is better at creativity... but because it is a lot cheaper." This is a repricing of mediocrity. Harari takes a darker view, suggesting that any activity based on "finding patterns and breaking patterns" will be subsumed. The future of art likely splits: high-value human authenticity vs. a flood of infinite, cheap synthetic content.

Clearer Thinking: The Misconceptions Worth Killing First

To navigate the next five years, we must first clear the fog of bad narratives. The experts identified several pervasive myths that distort public understanding.

  • "AI is Magic/Emergent" (Mitchell: "Technologists... push this narrative but I don’t know how much they really believe it."). The fix is demystification: treat AI as engineering, not theology. It's probabilistic math, not a ghost in the machine.
  • "Humans Control It" (Harari: "A.I. isn’t a tool... it is an agent that can make decisions."). The issue is agency: we are building systems that act, not just calculate. Control is a design challenge, not a given.
  • "Blue Collar is Safe" (Frey: "A homeowner can photograph a worn washer... receive a parts list, and follow a guide."). The issue is skill compression: AI lowers the barrier to entry for manual repairs, potentially devaluing specialized trade knowledge.
  • "LLMs = Intelligence" (Marcus: "Superficial and unreliable... Intelligence is about reasoning flexibly."). The issue is definitional: do not confuse fluency (speaking well) with reasoning (thinking well). They are distinct capabilities.
  • "Mass Unemployment" (Srinivas: "New technologies sometimes shift the nature of work... but they don’t remove working."). The counterpoint is adaptation: jobs aren't fixed slots; they are bundles of tasks. We will do different things, not nothing.
  • "Quiet = Safe" (Cotra: "Skeptics wrongly assume that [no immediate impact] disproves... catastrophic risks."). The issue is lag time: just because ChatGPT didn't end the world in 2023 doesn't mean safety is solved. Exponential curves look flat at the start.

2030 as True/False: A Useful Constraint, a Dangerous Habit

The interactive asked experts to validate statements about 2030 with a binary "True" or "False." This exercise reveals the fragility of binary thinking.

  • On Unemployment: Gary Marcus voted "True" on significant increases, while Helen Toner voted "False." This split isn't just about optimism; it's about the definition of "unemployment." We may see high employment but falling real wages, or a rise in "gig" work that technically counts as employment.
  • On Daily Use: Toner expects "Most Americans" to use chatbots daily (True), while Srinivas disagrees (False). This highlights a deployment gap: will AI be a conscious destination (like ChatGPT) or an invisible backend (like autocorrect)?

The Better Metric: Instead of asking "Will AI take our jobs?", we should watch for liability shifts (who gets sued when the AI fails?), wage premiums (does AI usage correlate with higher or lower pay?), and task substitution (which 30% of your job is now automated?).
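
To see why task substitution is a sharper lens than a binary unemployment vote, here is a toy worked example in Python (the tasks, hours, and automation flags are invented for illustration, not drawn from the interactive): model a job as a bundle of tasks and measure what share of hours is exposed.

```python
# Illustrative only: the task list and flags are invented
# to demonstrate the metric, not taken from any survey.
job = [
    # (task, weekly_hours, automatable_today)
    ("drafting routine reports",      10, True),
    ("data entry and cleanup",         6, True),
    ("client meetings",                8, False),
    ("judgment calls and escalations", 5, False),
    ("reviewing AI-drafted output",    3, False),
]

total_hours = sum(hours for _, hours, _ in job)
exposed_hours = sum(hours for _, hours, auto in job if auto)

print(f"Automation exposure: {exposed_hours / total_hours:.0%} of the week")
# -> Automation exposure: 50% of the week
```

A 50 percent exposure score does not mean the job vanishes; it means half the week is up for renegotiation, which is precisely the wage-and-task shift the experts disagree about.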

AGI Isn’t a Date—It’s a Question (and Usually the Wrong One)

The question "When will we get Artificial General Intelligence (AGI)?" is the industry's favorite parlor game. The experts largely refused to play by the rules.Harari dismantles the question with a brilliant analogy: Defining AGI as "comparable to human intelligence" is like "defining airplanes as 'able to fly like birds.'" Airplanes fly faster and carry more cargo, but they don't land on tree branches. Similarly, AI will vastly outperform humans in finance and law while remaining "much inferior" in social nuance or navigation.Nick Frosst and Aravind Srinivas reinforce this. Frosst notes we lack "abstraction, self-awareness and transfer learning," making AGI unlikely in ten years. Srinivas dismisses the term entirely as "poorly defined."

A Practical Timeline Framework

Instead of waiting for a "God-Machine," watch for these three horizons:

  1. Capability Milestones: Can it plan a complex itinerary without hallucinating? (Tech barrier)
  2. Liability Thresholds: Will a hospital let it diagnose without a doctor's signature? (Trust barrier)
  3. Social Absorption: How fast do we rewrite laws and norms to accommodate it? (Culture barrier)

What They Compare AI To—and What That Reveals

When asked for a historical analogy, the experts' choices reveal their deepest fears and hopes. These aren't predictions; they are warnings.

  • Information Systems: Social media (Mitchell), the internet (Frey), cellphones (Marcus). The hidden meaning is ubiquity and noise: expect massive connection but also massive pollution (misinformation, addiction, distraction).
  • Infrastructure: The steam engine (Toner), flight (Frosst), a tool or encyclopedia (Srinivas). The hidden meaning is utility: it will power society and shrink distances, intellectual or physical, but it requires new safety engineering.
  • Civilizational Shock: Language and the Stone Age (Harari), agriculture and a new species (Cotra). The hidden meaning is domination: this isn't a tool we use; it's a new environment we must survive. It changes what we are.

Use AI—But Don’t Let It Use You

Helen Toner provides the single most useful mental model for the AI era: The Construction Site vs. The Gym.

  • The Construction Site: The goal is the output (a building). Use the crane (AI). It’s efficient, safe, and logical.
  • The Gym: The goal is the process (getting stronger). Do not use the crane. If a robot lifts the weight for you, you have wasted your time.

Building on this, here is a consolidated action plan for the next five years, drawn from the collective advice of the panel.

Actionable Checklist for 2026

For Students & Learners:

  • Treat AI as a Tutor, Not a Ghostwriter: Use it to "question and probe one’s own understanding" (Frey), but do the "heavy lifting" of writing yourself (Toner).
  • Learn the Limits: Understand that these systems hallucinate. Don't use them to do your homework (Mitchell).

For Professionals:

  • Prioritize "Face-to-Face" Skills: As automated text becomes infinite and cheap, human interaction becomes a premium "luxury good" (Frey).
  • Master the "Ask": "Learn how to ask more questions" (Srinivas). The value shifts from retrieving information to querying it effectively.
  • Hedge Your Bets: Do not specialize too narrowly. Cultivate your "Head (intellect), Heart (social), and Hands (motor skills)" (Harari).

For Leaders & Teams:

  • Automate the Boring: Use AI for the "banal use cases" like spreadsheets and logistics (Frosst). That is where the ROI is right now.
  • Trust but Verify: Remember that even in 2025, 46% of developers distrust AI accuracy. Keep a human in the loop for high-stakes decisions.

Conclusion: Turning ‘The Future’ Back Into Choices

The future of AI is often presented as a tidal wave, an inevitable force that will wash over us. But the diversity of opinions in the New York Times interactive proves that the wave is made of choices: design choices, policy choices, and personal choices.

As we look toward 2031, five things are clear:

  1. Diffusion over Revolution: The technology will seep into the background (Google Maps style) rather than exploding in a single "Singularity" moment.
  2. Governance is the New Frontier: The technical leaps may slow down, but the legal and ethical battles (like publishers' copyright fights with AI companies) will define what products actually reach us.
  3. Education Will Be Turbulent: We are in for a messy transition period where traditional credentials lose value before new ones are established.
  4. Creativity Will Bifurcate: There will be a massive market for cheap, AI-generated content, and a smaller, elite market for "verified human" work.
  5. Responsibility is Yours: No one knows exactly what the world will look like in 10 years (Harari). The only safe strategy is adaptability.

The experts have placed their bets. Now it is time to place yours.
