🥽 Plastic Labs


    Violation of Expectation via Metacognitive Prompting Reduces Theory of Mind Prediction Error in Large Language Models

    Oct 12, 2023 · 1 min read

    • research
    • ml
    • philosophy

    Read on Arxiv.

