(Remote, Fellowship)

About the Role

We’re always on the lookout for talented students and independent researchers interested in qualitative and quantitative projects adjacent to our mission and products. Possible domains of inquiry include, but aren’t limited to, machine learning, alignment, cryptography, the cognitive sciences, and other interdisciplinary fields of study.

Our fellowships can accommodate stipends or grants for a variety of formats, timelines, and deliverables (publishable research, OSS code, philosophical investigation, proprietary company tech, or a mix thereof). We believe high trust and autonomy form the foundation of a good research culture.

If collaborating with Plastic to push the boundaries of artificial intelligence, digital identity, radically decentralized alignment, synthetic human representations, frontier security, positive-sum data practices, autonomous agents, etc., excites you intellectually, please don’t hesitate to reach out. We’d love to support your work.

About You

  • High alignment with Plastic Labs’ intellectual space
  • Possible research interest in representation engineering, control vectors, prompt optimization, sparse auto-encoders, etc
  • Possible research interest in agentic frameworks, autonomous agents, emergent behaviors, theory of mind, identity, alignment, etc
  • Possible research interest in the cognitive sciences or other adjacent interdisciplinary fields
  • Possible research interest in distributed systems, security, decentralized protocols, or cryptography
  • Possible research interest in anything explored on our blog

How to Apply

Please send the following to research@plasticlabs.ai:

  • Resume/CV in whatever form it exists (PDF, LinkedIn, website, etc)
  • Portfolio of notable work (GitHub, pubs, ArXiv, blog, X, etc)
  • Statement of alignment specific to Plastic Labs—how do you identify with our mission, how can you contribute, etc? (points for brief, substantive, heterodox)
  • Proposal for scope of research adjacent to our mission (approach, needs, budget, timeline, capacity, milestones, deliverables, Plastic intersections, etc)

Applications missing any of these four items won’t be considered, but optimize for speed over perfection. We can help with specifics, but flesh out as much as you can, even if it might change later. If relevant, be sure to credit the LLM you used.

And it can’t hurt to join Discord and introduce yourself or engage with our GitHub.

Selected Research We’re Tracking

Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm
Theory of Mind May Have Spontaneously Emerged in Large Language Models
Think Twice: Perspective-Taking Improves Large Language Models’ Theory-of-Mind Capabilities
Representation Engineering: A Top-Down Approach to AI Transparency
Theia Vogel’s post on Representation Engineering Mistral 7B an Acid Trip
A Roadmap to Pluralistic Alignment
Open-Endedness is Essential for Artificial Superhuman Intelligence
Simulators
Extended Mind Transformers
Violation of Expectation via Metacognitive Prompting Reduces Theory of Mind Prediction Error in Large Language Models
Constitutional AI: Harmlessness from AI Feedback
Claude’s Character
Language Models Represent Space and Time
Generative Agents: Interactive Simulacra of Human Behavior
Meta-Rewarding Language Models: Self-Improving Alignment with LLM-as-a-Meta-Judge
Cyborgism
Spontaneous Reward Hacking in Iterative Self-Refinement
… accompanying twitter thread

(Back to Work at Plastic)