TL;DR
Personalization is the next frontier. OpenAI gets it:
We're testing ChatGPT's ability to remember things you discuss to make future chats more helpful.
— OpenAI (@OpenAI) February 13, 2024
This feature is being rolled out to a small portion of Free and Plus users, and it's easy to turn on or off. https://t.co/1Tv355oa7V
Super exciting.
But what about the rest of us?
Welp, we built an open source reimplementation of OpenAI's "memory" features using Honcho to effortlessly organize sessions on a per-user basis.
You can derive facts about users, store them, and retrieve them for later use. And we're shipping a demo of this, implemented with the useful abstractions LangChain provides.
The user context rabbit hole goes deep; this is still just the start.
If you're building with or adjacent to Honcho, join our Discord; we'd love to help 🫡.
OpenAI Memories
This week OpenAI announced they're testing memory in ChatGPT. Specifically, this means learning about individual users in order to improve their experiences.
It's a limited initial rollout, closed under the hood, and rudimentary, but it appears to include functionality for deriving facts about users from conversation history and storing them to augment later generations.
There are controls for users to view derived facts (memories), prune them, or turn the feature off altogether. User memories are apparently also coming to GPTs.
They're betting, we believe correctly, that the real potential here is a wealth of agents whose behavior is in high fidelity with user identity.
We're pumped to see experiments like this taking place. But what if you're a developer who doesn't want to subscribe to this kind of platform dependency and all its attendant externalities? What if you're a user who wants independent or open source apps with a more mature version of these UX benefits?
Context is Critical
At Plastic Labs our mission is to enable rich user memory in and across every application. Only then will we really understand just how augmentative and transformative these agents can be. We've been laser-focused on this problem.
Right now, the vast majority of software UX is a 1-to-many experience. What you get as a user is, for the most part, the same as everyone else gets. Mass production unlocked the remarkable ability to produce the exact same goods for every consumer; software went further, allowing a good to be produced once and consumed with a consistent experience millions or billions of times.
AI apps can deal with each user generatively, on an individual basis; that is, an experience can be produced ad hoc for every user upon every interaction. From 1:many to 1:1, without prohibitive sacrifices in efficiency. But we're still underestimating the full scope of possibility here.
As it stands today, the space is mostly focused on the (albeit generative) 1:many tasks LLMs can perform. The apps remain more or less stateless with regard to the user. To reach 1:1 nirvana, we need more user-centric agent design. We need frameworks, mechanisms, services, and models dedicated to deep coherence with user identity.
Every agent interaction can be generated just in time for every person, informed by personal context more substantive than that of human-to-human sessions. User context will enable disposable, on-the-fly agents across verticals at lower marginal cost than 1:many software paradigms allow.
(Here's our co-founder Vince talking more about some of those possibilities)
"Open vs Closed"
We subscribe heavily to the spirit of the arguments Harrison Chase made in "OpenAI's Bet on Cognitive Architecture" just a few months ago:
There's a great quote from Jeff Bezos that says to only do what makes your beer taste better. This refers to the early industrial revolution, when breweries were also making their own electricity. A brewery's ability to make good beer doesn't really depend on how differentiated their electricity was, so those that outsourced electricity generation and focused more on brewing jumped to an advantage.
Is the same true of cognitive architectures? Does having control over your cognitive architecture really make your beer taste better? At the moment, I would argue strongly the answer is yes, for two reasons. First: it's very difficult to make complex agents actually function. If your application relies on agents working, and getting agents to work is challenging, then almost by definition if you can do that well you'll have an advantage over your competition. The second reason is that we often see the value of GenAI applications being really closely tied to the performance of the cognitive architecture. A lot of current companies are selling agents for coding, agents for customer support. In those cases, the cognitive architecture IS the product.
That last reason is also the reason that I find it hard to believe that companies would be willing to lock into a cognitive architecture controlled by a single company. I think this is a different form of lock-in than cloud or even LLMs. In those cases, you are using cloud resources and LLMs in order to build or power a particular application. But if a cognitive architecture moves closer and closer to being a full application by itself, you're unlikely to want to have that locked in.
The same applies to social cognition in LLMs, and the key to this is learning about the user and leveraging that knowledge. If proprietary, vertical-specific cognitive architectures make your beer taste better, then personalization tailors the beer to each and every user. If developers want control over how their app completes a task, then they'll want control over how it completes that task for each user. And users will want this quality of experience.
We've been saying for a while now that major walled gardens and their franchises (e.g. OAI's GPTs, Assistants API, and ChatGPT (+Microsoft?); Meta's social apps; Google's workspace suite; etc.) will have myriad ecosystem-native agents, all with shared access to your user profile.
The problem here is twofold: (1) independent apps are left out in the cold with respect to user context and personalization capabilities, and (2) users are left with a privacy situation little better than under web2 business models (or potentially way worse).
Those profiles are gated and proprietary to each climate-controlled garden. Step outside and UX plummets. If the independent and open product communities want to compete, they need individual taste-bud-mapping superpowers for their beer production.
And users fare little better, presented with yet another set of pre-packaged pseudo-choices about privacy to manage, none of which give them any real control. More paternalism is not the path to individually aligned agents.
Shouldn't we be able to experiment with all this without platform lock-in, allowing projects to collectively leverage user data for positive-sum experiences? Shouldn't users own their AI-modeled profiles and be able to carry them between independent agents that respect their policies?
Developers will want control over personalization for their application without all the redundant overhead. Users will want a say in how they're being reasoned about and why.
This is our vision for Honcho.
Intellectual Respect
llms are remarkable empaths
if you'd read that much fiction, you would be too
— Courtland Leer (@courtlandleer) February 2, 2024
Today we're releasing a naive adaptation of research we published late last year.
There's a ton we plan to unpack and implement there, but the key insight we're highlighting today is affording LLMs the freedom and autonomy to decide what's important.
(If you want to go deeper into the research, this webinar we did with LangChain is a great place to start, as is the "Violation of Expectations" chain they implemented.)
This release lets you experiment with several ideas: we feed messages to the model and ask it to derive facts about the user, store those insights for later use, then retrieve them as context to augment a later generation.
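Here's a minimal sketch of the first half of that loop, deriving and storing facts, built on LangChain abstractions. The prompt wording, the `derive_and_store` helper, and the per-user FAISS index are illustrative choices of ours, not Honcho's API; it assumes `langchain-openai`, `langchain-community`, and `faiss-cpu` are installed and an OpenAI key is set.

```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.vectorstores import FAISS

llm = ChatOpenAI(model="gpt-4")
embeddings = OpenAIEmbeddings()

# Deliberately open-ended: let the model decide what's worth remembering.
derive_prompt = ChatPromptTemplate.from_messages([
    ("system", "List any facts about the user that can be derived "
               "from the conversation, one per line."),
    ("user", "{conversation}"),
])
derive_chain = derive_prompt | llm | StrOutputParser()

def derive_and_store(user_id: str, conversation: str,
                     store: FAISS | None = None) -> FAISS:
    """Derive user facts from a session transcript and index them for later retrieval."""
    lines = derive_chain.invoke({"conversation": conversation}).splitlines()
    facts = [line.strip("- ").strip() for line in lines if line.strip()]
    metadatas = [{"user_id": user_id} for _ in facts]
    if store is None:
        # First session for this user: create a fresh index.
        return FAISS.from_texts(facts, embeddings, metadatas=metadatas)
    # Subsequent sessions: keep accumulating facts in the same index.
    store.add_texts(facts, metadatas=metadatas)
    return store
```

Calling `derive_and_store(user_id, transcript)` after each session keeps the per-user index growing over time.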
Check out our LangChain implementation and Discord bot demo.
Where things get powerful is in the aggregate. What resolves is a highly insightful picture of who your users are and what they need: a key reservoir of context for improving the experience both qualitatively and quantitatively.
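Continuing the sketch above, here's what the retrieval half might look like: facts accumulated across sessions are pulled back out by relevance and injected into a later generation. Again, `personalized_reply` and the prompt wording are our illustrative choices, not the demo's actual code.

```python
# Continues the sketch above: `llm` and a populated `store` are assumed.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.vectorstores import FAISS

respond_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. "
               "What you know about this user:\n{facts}"),
    ("user", "{message}"),
])

def personalized_reply(store: FAISS, message: str, k: int = 5) -> str:
    """Retrieve the most relevant stored facts and ground the reply in them."""
    docs = store.similarity_search(message, k=k)
    facts = "\n".join(doc.page_content for doc in docs)
    chain = respond_prompt | llm | StrOutputParser()
    return chain.invoke({"facts": facts, "message": message})
```

Every reply draws on everything derived so far, so the picture of the user, and with it the quality of the response, sharpens as sessions accumulate.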
N.b. you can certainly direct the model with as much verbosity as you like, but we've found through extensive experimentation that the more you trust the model, the better and more useful the results.
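To illustrate the difference, here's a hypothetical contrast between an over-specified derivation prompt and the kind of open-ended one used in the sketches above; both strings are ours, not from the demo.

```python
# Hypothetical contrast, not from the demo. In our experience the
# open-ended prompt surfaces richer, less predictable insights than
# the rigid, schema-driven one.
OVERSPECIFIED = (
    "Extract the user's name, age, location, occupation, hobbies, "
    "and communication style. Output exactly these fields as JSON."
)
OPEN_ENDED = (
    "What, if anything, seems worth remembering about this user? "
    "You decide what's important."
)
```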
This isn't surprising when you consider how much content about what people are thinking is contained in a model's pretraining. It's led to some really exciting emergent abilities.
Give the model some trust and respect, and youâll be rewarded.
Let's Build
If you're experimenting with personalization, building with Honcho, or just interested in these ideas, join our Discord, and let's jam on what we can build together.
A healthy open ecosystem will include lots of projects trying lots of new ways to synthesize and leverage user context. We're here to support them all 🥜.