Welcome to the inaugural edition of Plastic Labs’ “Extrusions,” a monthly prose-form synthesis of what we’ve been chewing on.

This first one will be a standard new year recap/roadmap to get everyone up to speed, but after that, we’ll try to eschew traditional formats.

No one needs another newsletter, so we’ll work to make these worthwhile. Expect them to be densely linked glimpses into the thought-space of our organization. And if you like, you can engage with the ideas directly on GitHub.

2023 Recap

Last year was wild. We started as an edtech company and ended as anything but. There’s a deep dive on some of the conceptual lore in last week’s “Honcho: User Context Management for LLM Apps”:

Plastic Labs was conceived as a research group exploring the intersection of education and emerging technology…with the advent of ChatGPT…we shifted our focus to large language models…we set out to build a non-skeuomorphic, AI-native tutor that put users first…our experimental tutor, Bloom, was remarkably effective—for thousands of users during the 9 months we hosted it for free…

Building a production-grade, user-centric AI application, then giving it nascent theory of mind and metacognition, made it glaringly obvious to us that social cognition in LLMs was both under-explored and under-leveraged.

We pivoted to address this hole in the stack and build the user context management solution agent developers need to truly give their users superpowers. Plastic applied and was accepted to Betaworks’ AI Camp: Augment.

We spent camp in a research cycle, then published a pre-print showing it’s possible to enhance LLM theory of mind ability with predictive coding-inspired metaprompting.

Then it was back to building.

2024 Roadmap

This is the year of Honcho.

Last week we released the…

…first iteration of Honcho, our project to redefine LLM application development through user context management. At this nascent stage, you can think of it as an open-source version of the OpenAI Assistants API. Honcho is a REST API that defines a storage schema to seamlessly manage your application’s data on a per-user basis. It ships with a Python SDK; you can read more about how to use it here.
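To make the per-user storage idea concrete, here’s a minimal sketch of what such a schema might look like. The class names, fields, and `add_message` helper are illustrative assumptions for this post, not Honcho’s actual data models or SDK surface:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

# Hypothetical sketch of a per-user storage schema in the spirit of
# Honcho's REST API. Names and fields are illustrative, not Honcho's.

@dataclass
class Message:
    is_user: bool  # distinguishes user turns from AI turns
    content: str
    id: str = field(default_factory=lambda: str(uuid.uuid4()))
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

@dataclass
class Session:
    user_id: str  # every session is scoped to exactly one user
    messages: list = field(default_factory=list)
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def add_message(self, is_user: bool, content: str) -> Message:
        """Append a turn to this user's session and return it."""
        msg = Message(is_user=is_user, content=content)
        self.messages.append(msg)
        return msg

# Usage: one session per user per conversation
session = Session(user_id="user-123")
session.add_message(is_user=True, content="Hello!")
session.add_message(is_user=False, content="Hi! How can I help?")
```

The point of the schema is that user identity is the primary key everything hangs off of, so an application can retrieve, augment, or reason over any individual user’s history without custom plumbing.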

And coming up, you can expect a lot more:

  • Next we’ll drop a fresh paradigm for constructing agent cognitive architectures with users at the center, replete with cookbooks, integrations, and examples

  • After that, we’ve got some dev viz tooling in the works to allow quick grokking of all the inferences and context at play in a conversation, visualization and manipulation of entire agent architectures, and swapping and comparing the performance of custom cognition across the landscape of models

  • Finally, we’ll bundle the most useful of all this into an opinionated offering of managed, hosted services

Keep in Touch

Thanks for reading.

You can find us on X/Twitter, but we’d really like to see you in our Discord 🫡.