June, 2025
San Francisco

FIRESIDE CHAT

Emmett Shear, CEO of Softmax

Emmett Shear (Twitch co-founder and former interim CEO of OpenAI) is currently working on the alignment problem as CEO & co-founder of AI lab Softmax. His approach? Scaling up organic alignment. Drawing on analogies like cells cooperating in a body, Emmett argues that true alignment is not about forcing individuals to align to fixed values—it emerges when we align with each other and see others as part of our same collective.

PRESENTATION

Benjamin Bolte, CEO of K-Scale Labs, Jan Liphardt, founder of OpenMind

Benjamin Bolte, CEO of K-Scale Labs, is building fully open-source humanoid robots with a hacker mentality. Jan Liphardt, founder of OpenMind, is building modular software that adapts to different robot form factors, so that what a robot with one form factor learns contributes to robots with other form factors.
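
A minimal sketch of that modular idea, assuming a simple shared controller interface (the names FormFactor, Humanoid, Quadruped, and fetch_skill below are illustrative, not OpenMind's actual API): a skill written once against the interface runs on any body, so progress on one form factor carries over to others.

    # Hypothetical sketch of form-factor-agnostic robot software; the names
    # (FormFactor, Humanoid, Quadruped, fetch_skill) are illustrative, not
    # OpenMind's actual API.
    from abc import ABC, abstractmethod

    class FormFactor(ABC):
        @abstractmethod
        def actuate(self, command: str) -> None: ...

    class Humanoid(FormFactor):
        def actuate(self, command: str) -> None:
            print(f"humanoid executes: {command}")

    class Quadruped(FormFactor):
        def actuate(self, command: str) -> None:
            print(f"quadruped executes: {command}")

    def fetch_skill(robot: FormFactor) -> None:
        # Written once against the interface; runs on any form factor, so a
        # skill developed on one body transfers to another.
        for step in ("locate object", "approach", "grasp", "return to base"):
            robot.actuate(step)

    fetch_skill(Humanoid())
    fetch_skill(Quadruped())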

PRESENTATION

Trevor McCourt, CTO of Extropic

Jacob Buckman calls expanded context length the next major breakthrough in AI model architecture, because transformers’ costly quadratic scaling in sequence length limits how much data they can retain. He proposes “power attention,” a sub-quadratic approach that he expects most model builders will adopt.
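
For a sense of why quadratic scaling bites, a back-of-the-envelope sketch (the FLOP counts and model width are illustrative assumptions; the details of power attention itself are not shown here):

    # Back-of-the-envelope attention cost (illustrative assumptions:
    # standard scaled dot-product attention, d_model = 4096).
    def attention_flops(seq_len: int, d_model: int) -> int:
        # Two n x n x d matrix multiplies (QK^T and attn @ V), ~2 FLOPs per MAC.
        return 2 * 2 * seq_len * seq_len * d_model

    for n in (1_000, 10_000, 100_000):
        print(f"n={n:>7,}: {attention_flops(n, 4096):.2e} FLOPs per layer")
    # 10x the context -> ~100x the attention compute; a sub-quadratic
    # mechanism like power attention targets exactly this blow-up.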

PRESENTATION

Chip Huyen, author of AI Engineering & Designing ML Systems

Chip speaks to common product traps founders and engineers fall into: 1. Using generative models when a simpler approach is better (e.g. a classifier to route user requests in a chatbot). 2. Optimizing the wrong UX axis (e.g. in a note-taking app, users don’t care about meeting summary length; they want action items).
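
A minimal sketch of the fix for the first trap, assuming scikit-learn is available (the intents and training examples below are made up for illustration): a small classifier routes requests cheaply, with no generative model in the loop.

    # Minimal intent router (assumptions: scikit-learn; made-up intents).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    examples = [
        ("refund my order", "billing"),
        ("I was charged twice", "billing"),
        ("app crashes on login", "tech_support"),
        ("how do I reset my password", "tech_support"),
        ("what plans do you offer", "sales"),
        ("can I upgrade my subscription", "sales"),
    ]
    texts, labels = zip(*examples)

    # A tiny TF-IDF + logistic-regression pipeline: fast, cheap, debuggable.
    router = make_pipeline(TfidfVectorizer(), LogisticRegression())
    router.fit(texts, labels)

    print(router.predict(["why was my card charged twice"])[0])  # expected: billing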

PRESENTATION

Justus Mattern, Research Engineer, Prime Intellect

Justus Mattern shows how Prime Intellect is scaling RL toward open AGI: 1. With RL, an LLM iteratively generates its own training data, and since inference is parallelizable, each GPU can host a model replica and sample independently. 2. The bottleneck in RL is the availability of RL environments; Prime Intellect launched Environments Hub, an open-source platform for RL environment development, to address this.
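
A minimal sketch of point 1, with processes standing in for GPUs and a stub in place of a real model replica (the function names and the fake generator are assumptions for illustration):

    # Parallel rollout sampling sketch (processes stand in for GPUs; the
    # "replica" below is a stub, not a real model).
    from multiprocessing import Pool

    def sample_rollouts(worker_id: int) -> list[str]:
        # In a real setup, load a model replica onto GPU `worker_id` and
        # generate completions; here we fake the generation step.
        def replica(prompt: str) -> str:
            return f"completion from replica {worker_id}: {prompt}"
        prompts = [f"task-{i}" for i in range(4)]
        return [replica(p) for p in prompts]

    if __name__ == "__main__":
        with Pool(processes=4) as pool:  # one process per (hypothetical) GPU
            per_worker = pool.map(sample_rollouts, range(4))
        rollouts = [r for worker in per_worker for r in worker]
        print(len(rollouts), "rollouts collected for the next RL update")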

PRESENTATION

Divya Siddarth, Executive Director at The Collective Intelligence Project

Divya is working to make sure the human collective has agency to shape the future. She laid out a roadmap to democratic AI, initiated dialogues in 70+ countries to collect public input on what frontier models should look like, and is actively connecting the open-source and democracy movements.

PRESENTATION

Jacob Buckman, CEO at Manifest AI

Today's LMs have huge long-term memory (parameters and training data) but relatively very small working memory (context). With standard attention architecture, extending the working memory makes the model’s internal state grow rapidly, which weakens in-context learning beyond a few hundred examples. To solve this, Jacob with team at Manifest AI created Power Retention as an alternative architecture to transformers.

PRESENTATION

Bilge Acun, research scientist at Meta AI

Bilge is making models more efficient and sustainable by optimizing for carbon footprint as a metric at the design stage. Her recent project, CATransformers, is a model and architecture search framework that takes a pre-trained transformer model, prunes it across dimensions such as the number of layers and attention heads, and fits it to an optimal hardware architecture described by parameters like the number of cores and memory size.
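
A toy sketch of that search loop, with made-up accuracy and carbon estimators standing in for the framework's real measurements: scan pruning and hardware configurations and keep the lowest-carbon one that clears an accuracy floor.

    # Toy model/hardware search loop (the accuracy and carbon estimators are
    # invented stand-ins; the real framework uses measured accuracy, latency,
    # and carbon figures).
    from itertools import product

    def estimate(layers: int, heads: int, cores: int, mem_gb: int):
        accuracy = 0.70 + 0.01 * layers + 0.005 * heads       # stand-in model
        carbon = 0.2 * layers * heads + 0.5 * cores + mem_gb  # stand-in model
        return accuracy, carbon

    best = None
    for layers, heads, cores, mem_gb in product((6, 12, 24), (4, 8, 16), (8, 16), (8, 16)):
        acc, co2 = estimate(layers, heads, cores, mem_gb)
        if acc >= 0.85 and (best is None or co2 < best[0]):
            best = (co2, {"layers": layers, "heads": heads, "cores": cores, "mem_gb": mem_gb})

    print(best)  # lowest-carbon configuration that clears the accuracy floor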

IN PARTNERSHIP WITH

© 2025 Copyright
All Rights Reserved
