AI Builders Digest — 2026-04-02
X / TWITTER
Peter Steinberger (Polyagentmorous ClawFather @ OpenClaw) · Link
No notable posts.
Guillermo Rauch (CEO @ Vercel) · Link
Vercel signups are growing at 52% MoM, up from 23% the month before and 17% the month before that. The acceleration is real.
Zara Zhang (Builder) · Link
Zara discovered an "aha moment" with OpenClaw: she replaced her to-do list entirely by braindumping tasks to it. Every morning it sends her a report of what's already done and what needs attention. Her verdict: "This might actually be a to-do management system that works."
Dan Shipper (CEO @ Every) · Link
SaaS isn't dead: it just needs to become agent-native, and Linear is the proof of concept. Dan had Linear's CEO Karri Saarinen on the AI & I podcast to break down how a PM tool for human developers became agent-native: Linear expanded its user base so that agents are now first-class citizens alongside humans, tracked like any other team member, while the core mission stayed intact. The result? Codex, Coinbase, and Brex all run their agents on Linear.
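To make "agents as first-class citizens" concrete, here's a minimal sketch (not from the episode) of creating a Linear issue and assigning it directly to an agent's user record through Linear's public GraphQL API. The API key, team id, and agent user id are placeholders.

```python
import requests

LINEAR_API = "https://api.linear.app/graphql"
API_KEY = "lin_api_..."  # placeholder: a personal API key goes in the Authorization header as-is

# Hypothetical ids: in Linear an agent shows up as a user like any teammate,
# so assigning it work is the same mutation you'd use for a human.
AGENT_USER_ID = "usr-codex-agent"  # placeholder
TEAM_ID = "team-platform"          # placeholder

mutation = """
mutation AssignToAgent($teamId: String!, $title: String!, $assigneeId: String!) {
  issueCreate(input: { teamId: $teamId, title: $title, assigneeId: $assigneeId }) {
    success
    issue { id identifier url }
  }
}
"""

resp = requests.post(
    LINEAR_API,
    json={"query": mutation, "variables": {
        "teamId": TEAM_ID,
        "title": "Fix flaky auth test",
        "assigneeId": AGENT_USER_ID,
    }},
    headers={"Authorization": API_KEY},
)
print(resp.json())  # { "data": { "issueCreate": { "success": true, ... } } }
```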
PODCASTS
Latent Space — "Mistral: Voxtral TTS, Forge, Leanstral, & Mistral 4" · Watch
Guillaume Lample (Chief Scientist @ Mistral) and Pavan Kumar Reddy (Audio Research Lead @ Mistral) joined Latent Space to announce Voxtral TTS, Mistral's first speech generation model: a 3B-parameter autoregressive model built on their Ministral backbone, supporting 9 languages at a fraction of the cost of competitors.
The Takeaway: The most underutilized enterprise asset is a company's own data — fine-tuning open models on years of proprietary domain knowledge beats any closed model out of the box.
Lample made a point that cuts through the noise: when enterprise customers use closed-source models, they're not leveraging data they've been collecting for years or even decades — sometimes trillions of tokens in their specific domain, data that doesn't exist on the public internet and that closed models simply don't have access to. The result is that companies use the same generic model as their competitors, leaving all their accumulated institutional knowledge on the table.
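As an illustration of Lample's point (not anything Mistral described on the show), here's a minimal sketch of fine-tuning an open checkpoint on proprietary text with LoRA, using Hugging Face transformers + peft. The model id, data file, and hyperparameters are placeholders.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_ID = "mistralai/Mistral-7B-v0.3"  # any open checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Low-rank adapters keep the base weights frozen, so the domain data only
# trains a few million parameters instead of all 7B.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))

# "internal_docs.jsonl" stands in for years of proprietary domain text.
data = load_dataset("json", data_files="internal_docs.jsonl", split="train")
data = data.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=data.column_names,
)

Trainer(
    model=model,
    args=TrainingArguments("domain-lora", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```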
On Voxtral specifically, Mistral went with a flow matching architecture for the audio generation head rather than the more common depth transformer approach. The reason is entropy: even the same word spoken by the same person can be inflected in countless ways, and the mean of that distribution sounds like blurred speech. Flow matching models the full distribution rather than its mean, and it cuts inference steps dramatically (4 or 16 steps versus K autoregressive steps with a depth transformer), enabling real-time streaming for voice agents.
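To make the step-count argument concrete, here is a generic flow-matching sampler (an illustration, not Voxtral's actual code): it integrates a learned velocity field dx/dt = v_theta(x, t) from noise (t=0) to data (t=1) with a handful of Euler steps. The toy network and dimensions are placeholders; the real audio head would condition on the backbone's hidden states.

```python
import torch

class ToyVelocity(torch.nn.Module):
    """Stand-in for the audio head's velocity network (assumption: the real
    model also conditions on text/backbone states)."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim + 1, 128), torch.nn.SiLU(),
            torch.nn.Linear(128, dim))

    def forward(self, x, t):
        # Concatenate the time scalar onto each sample before predicting velocity.
        return self.net(torch.cat([x, t[:, None]], dim=-1))

@torch.no_grad()
def sample(velocity, batch: int, dim: int, num_steps: int = 16):
    x = torch.randn(batch, dim)        # start from Gaussian noise at t=0
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((batch,), i * dt)
        x = x + velocity(x, t) * dt    # one Euler step along the learned flow
    return x                           # samples at t=1

# 4-16 network calls total, versus one autoregressive call per step
# with a depth transformer.
frames = sample(ToyVelocity(dim=80), batch=4, dim=80, num_steps=16)
print(frames.shape)  # torch.Size([4, 80]), e.g. 80-bin mel frames
```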
Mistral also sees full-duplex voice (the user and the model speaking at the same time) as the next frontier, along with combining audio with video for spatial considerations. They're hiring across all offices (Paris, London, Palo Alto, New York, Zurich, Warsaw, and remote).
"If they're using closed source models, they are basically not benefiting from all these insights, all these data they have collected for years. So much data — sometimes it's trillions of tokens of data in a very specific domain, their domain — which is data that you will not find on the public internet."
Generated through the Follow Builders skill: https://github.com/zarazhangrui/follow-builders