Notes from the lab.

Research, engineering write-ups, and field notes from the team. Proper posts are on the way. Here’s what’s coming.

Field notes

There is a free lunch: Why linear attention is necessary for context scaling

Insights into why linear attention and Mamba may be necessary for expressive context scaling.

Coming soon
Research

How Mechanistic Interpretability ought to work

An overview of efficient circuit discovery for post-SAE mechanistic interpretability, enabling high-speed knowledge and safety distillation.

Coming soon
Field notes

Small models, real work

A behind-the-scenes look at our work, and the niche research we view as promising for the future of small but efficient language models.

Coming soon

Want the posts as they land? Create an account and we’ll let you know.

Get Early Access