Large language models are already exceptional at imputing a startling amount from very little user data, with an efficiency that puts AdTech to shame. But the ceiling here is far higher than most imagine.

Contrast recommender algorithms (impressive as they are!), which need mountains of activity data to back into a single preference, with the human connectome, where a single cubic millimeter holds roughly 1,400 TB of compressed representation.

LLMs give us access to a new class of this data, going beyond tracking the behavioral toward the semantic. They can distill and grok much ‘softer’ psychological signals, giving insight into complex mental states: values, beliefs, intentions, aesthetics, desires, histories, knowledge, and more.
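
To make that concrete, here’s a minimal sketch of using an LLM as a semantic sensor: prompt it to hypothesize mental states from a single utterance. This is an illustration of the idea, not any particular product’s method; the model name and prompt wording are assumptions, and the output is hypotheses, not facts.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def infer_mental_states(user_message: str) -> str:
    """Ask an LLM to hypothesize the 'soft' psychological signals
    behind a single user utterance: values, beliefs, intentions, etc."""
    prompt = (
        "Given the following user message, list plausible inferences about "
        "the user's values, beliefs, intentions, aesthetics, and desires. "
        "Mark each inference with a confidence (low/medium/high).\n\n"
        f"Message: {user_message!r}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Even one sentence leaks aesthetics, values, and intent:
print(infer_mental_states(
    "Ugh, another Electron app. I just want fast, native tools."
))
```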

There’s so much latent here, though, that plug-in-your-docs/email/activity schemes and user surveys are laughably limited in scope. We need ambient methods that run social cognition continuously, like Honcho.
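
“Ambient” meaning the inference runs as a side effect of normal interaction, not as an explicit intake step. A toy sketch of the shape of such a loop (an illustration of the concept, not Honcho’s actual API; it reuses the hypothetical `infer_mental_states` from above):

```python
from dataclasses import dataclass, field

@dataclass
class UserRepresentation:
    """Toy stand-in for a continuously revised user model."""
    theory_of_mind: list[str] = field(default_factory=list)

    def revise(self, inference: str) -> None:
        # A real system would use an LLM to reconcile new inferences with
        # old ones: dedupe, resolve contradictions, decay stale beliefs.
        self.theory_of_mind.append(inference)

def on_message(user: UserRepresentation, message: str) -> None:
    """Hook every message as it flows through the app -- no surveys,
    no one-time doc uploads, just continuous background inference."""
    user.revise(infer_mental_states(message))
```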

As we asymptotically approach a fuller accounting of individual identity, we can unlock more positive-sum application and agent experiences, far richer than the exploitation of base desire we’re used to.