8 Lessons from Inside Silicon Valley’s AI Product Teams
What European founders and product teams can learn from OpenAI, Gamma, Rippling, Vercel & more.
Hey, I’m Timothe, cofounder of Stellar & based in Paris.
I’ve spent the past years helping 500+ startups in Europe build better product orgs and strategies. Now I’m sharing what I’ve learned (and keep learning) in How They Build. For more: My YouTube Channel (🇫🇷) | My Podcast (🇫🇷) | Follow me on LinkedIn.
At the end of October, I flew to San Francisco with one big question in mind:
How are Silicon Valley startups using AI to build their products?
A few months earlier, Maud from OpenAI’s Europe comms team had invited me to DevDay, their annual conference in SF. At first, I wasn’t even sure I could go. At Stellar we were in the middle of a huge rush, and when you’re only two cofounders, every hour counts.
But the idea of attending one of the biggest tech keynotes in the world kept coming back. It felt like the kind of opportunity you don’t get twice.
So I decided to turn this trip into something much bigger: a full documentary on how Silicon Valley is using AI to build products.
4 weeks later, I was on a plane with a new camera in my bag, 150 outreach emails sent, and a pretty wild recording schedule. In 1 week, I recorded more than 20 interviews with teams from OpenAI, DeepMind, Rippling, Gamma, Vercel, AnyShift, Fabi, Revo, Y Combinator and more.
The documentary is now live on YouTube. But I know some of you prefer reading to watching a 1-hour video. So in this edition, I’m breaking down the 8 biggest things I learned about how the best teams in the Bay Area are actually using AI inside their products and organizations.
Lesson 1: The skepticism gap
The first thing that hit me when I started these conversations in SF:
the level of skepticism around AI is dramatically lower than what I see in Europe.
In France, I still hear variations of the same sentences:
“AI doesn’t scale.”
“The code quality isn’t good enough.”
“It’s cool for demos, but not for production.”
In the Bay Area, the default mindset is almost the opposite:
“How do I integrate AI into all my workflows?”
At Rippling, Ankur (Head of AI, ex-CTO of SAP) told me that for them, AI is now the default, not the exception. They’ve even published an internal “AI stance” where the baseline expectation is that people use AI in their day-to-day work.
He estimates that roughly:
20% of people are true AI “power users”
80% are “AI-curious”, willing to use it but needing structure and support
His job as Head of AI is less about picking tools and more about creating permission and structure:
a clear top-down message: “By default, you should be using AI in your work.”
guidelines on what’s allowed vs not allowed
internal rituals where the top 20% demo their workflows to everyone else.
Same thing at Google DeepMind. Henri, PM on Gemini’s agent mode, told me that in the Bay Area, people are deeply “bought in” on AI, while in France he still feels much more friction and hesitation.
The result is simple: they’re a full mindset curve ahead.
In Europe, many teams are still debating if AI is good enough.
In SF, teams are already optimizing how to use it everywhere.
Lesson 2: Roles are collapsing
My second big learning: AI is blowing up traditional role boundaries.
In a lot of European teams, roles are still very “boxed”:
PM writes the spec
Designer does the mockups
Engineer writes the code
Everyone stays in their lane
At Rippling it’s already different:
Designers use Cursor to edit the UI directly in code and open PRs
PMs use AI to test technical capabilities, run analyses and prototype ideas without waiting for an engineer
Engineers obviously lean on AI to move much faster on the technical side
Ankur described AI with a metaphor I really liked:
“AI is like a super-competent intern you’ve been assigned.” — Ankur Bhatt, Head of AI at Rippling
You don’t replace yourself. You delegate work to the intern, and you move one level up: more strategy, more judgment, more impact.
This same pattern came up in several other interviews:
At DeepMind, Henri calls it “model empathy” and explains how everyone now touches prompting, prototyping and experimentation — designers, PMs, engineers.
At Gamma, Deeni (Head of AI Product) told me:
“The stack is collapsing. Designers code. PMs are closer to the repo. Engineers are much more product-focused. Prototyping, design, scoping and engineering now happen together, not in a linear hand-off.”
At Revo, Mehdi said AI has turned his team into “a group of multi-disciplinary creators, not a factory line”. Designers ship functional prototypes, PMs write code, engineers model product workflows.
The more you look at these teams, the clearer it becomes: job titles stay, but the borders between roles are melting.
Lesson 3: Ambition as a default setting
One of my favorite conversations was with Olivier Godement, Head of Platform & Business Products at OpenAI (and ex-Stripe).
I asked him a simple question:
“Are US product teams actually better than European teams?”
His answer was very clear:
“Technically? No.” European talent is excellent.
“On ambition? Yes.” That’s where the real gap is.
He explained that in the Bay Area, when a team starts a new product, they rarely aim for +10%. They aim for ×10, ×50, ×100 in impact, users or revenue.
His reasoning is pretty brutal, but true:
“Building a small product is hard. Building a massive product is hard.
You may as well aim for the biggest one.” — Olivier Godement.
In Europe, we still tend to reduce ambition to increase the probability of success. In SF, they keep ambition high and accept that the same effort could create much bigger outcomes.
You feel it in:
how fast they ship
the types of products they attempt
how they talk about their goals
And you feel it particularly strongly in AI. Most of them don’t try to “add an AI feature”. They try to rethink their product and category around AI from day one.
Lesson 4: Product development is now circular
Another big shift: the classic spec → design → dev → test → release cycle is breaking.
At Gamma, Deeni told me something that really stuck:
“With AI, you have to build before you understand. You prototype, then the prototype reveals the real scope.”
Because models are non-deterministic and their capabilities keep changing, you simply can’t define everything upfront in a Google Doc and expect reality to follow.
Their process for a new feature looks more like this:
Open Cursor / Claude and build a prototype in 1–2 hours
Play with it internally
Ask:
Does it behave consistently?
Is it usable?
Is it stable enough to put in front of millions of users?
Only then do they write the “real” scope and refine the UX
Iterate fast in tight loops: prototype → test → adjust → prototype again
Product development becomes circular, fast, and alive, not linear and document-driven.
Olivier at OpenAI described something similar: many PMs on his team don’t even write PRDs anymore. They spend an hour building a prototype, show it to customers, and iterate from there. You can’t fake quality in a prototype — which forces much deeper thinking than a nicely written doc.
Lesson 5: From “one-shot AI” to agentic workflows
Another big change between 2024 and 2025: teams are moving from one-shot AI (“answer this prompt”) to agentic AI (“run this workflow end-to-end”).
At Vercel, Marcos explained how, in just one year, they went from simple “LLM answers” to agents that can chain actions, reason, retry, and correct themselves.
He gave a concrete example:
A teammate needed retention cohorts and a dashboard from a large dataset
Instead of writing SQL and debugging manually, he launched an agent
The agent:
explored the database
wrote a query
ran it, saw the error
diagnosed and fixed it
re-ran it
and eventually produced a full dashboard — in about 20 minutes, alone.
“A year ago, this was impossible. Today, an agent can do what an analyst would have spent two hours doing.” — Marcos Grappeggia, Vercel
This is a mental shift:
We’re not just asking AI to generate text or code anymore.
We’re asking it to execute tasks, end-to-end, like a digital teammate.
Lesson 6: Evaluation is the hardest new skill in PM
If there’s one thing everyone agreed on — from Gamma to Vercel to OpenAI — it’s this:
“Building an AI feature is now the easy part. Evaluating it is the hard part.”
At Gamma, Deeni explained that whenever they release a new AI feature (slide generation, editing tools, image models…), the bulk of the work isn’t shipping the first version. It’s:
defining what “good” looks like
defining what “bad” looks like
finding a way to measure both
accepting that AI output is never fully deterministic
She framed it nicely:
“A model can be more creative but less reliable. More faithful but less inspiring. You gain on one axis, you lose on another.”
At Vercel, Marcos talked about 2 layers of evaluation:
Offline evals (before launch):
test sets
reference prompts
edge cases
guardrails
Online evals (after launch):
usage metrics
acceptance rates
human feedback
automatic detection of quality degradation over time
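The offline layer can be sketched as a tiny harness: a fixed set of reference prompts, a predicate defining what “good” looks like for each, and a pass-rate threshold that gates release. This is an illustrative sketch only; `generate`, the test cases, and the 90% threshold are assumptions, not anyone’s production setup.

```python
# Minimal sketch of an offline eval harness. `generate` is a
# hypothetical stand-in for the AI feature under test.

def generate(prompt: str) -> str:
    """Placeholder for the model-backed feature being evaluated."""
    return prompt.upper()  # stand-in behavior for the sketch

TEST_SET = [
    # (reference prompt, predicate defining what "good" looks like)
    ("make a title slide", lambda out: "TITLE" in out),
    ("", lambda out: out == ""),  # edge case: empty input
]

def offline_eval(threshold: float = 0.9) -> bool:
    """Run every reference prompt and gate release on the pass rate."""
    passed = sum(1 for prompt, ok in TEST_SET if ok(generate(prompt)))
    rate = passed / len(TEST_SET)
    print(f"pass rate: {rate:.0%}")
    return rate >= threshold  # ship only above the threshold
```

Re-running this same harness after every underlying model update is what catches the silent breakage mentioned below: the code didn’t change, but the pass rate did.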
And there’s a twist: as soon as the underlying model is updated (which can happen every week), your evals can silently break.
For all of them, evaluation is becoming a core PM skill:
frame the intent
define quality
design the measurement system
keep adjusting as models evolve
In Europe we’re still arguing about which AI tool to use. In the Bay Area, they’re already optimizing how to measure and govern the agents they deploy.
Lesson 7: AI is erasing technical borders
Another thing I didn’t expect to see so clearly: AI is quietly making everyone “more full-stack”.
At Fabi, Lei told me he comes from a machine learning background — not a frontend one. But now, with AI:
he can fix frontend bugs
touch backend
and ship components end-to-end without deeply knowing every part of the stack
“AI gives you a full-stack skillset you didn’t have at the start.” — Lei Tang, CTO at Fabi.ai
At AnyShift, Roxane described the other side of the coin:
AI makes it feel like everything is possible, so the real risk is going too far.
You want to:
automate half a pipeline
refactor a huge codebase
build an agent that “does everything”
And you can.
But you might waste days on the last 10% where a manual step would have been enough.
At Fabi, they even learned it the hard way:
Lei once pushed a 1000-line PR generated by AI to fix a single issue
After review, the real fix was… two lines of code
The rest was unnecessary refactoring the AI had decided to do “to be helpful”
They now encode very explicit guardrails directly in the repo and prompts:
“Don’t boil the ocean.”
“Don’t refactor modules unless explicitly asked.”
“Make the minimal change needed to fix the bug.”
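In practice, guardrails like these can live in a repo-level rules file that coding assistants pick up automatically. A hypothetical example (the exact filename and convention depend on the tool you use):

```
# .cursorrules — hypothetical example of repo-level AI guardrails
- Make the minimal change needed to fix the bug.
- Do not refactor modules unless explicitly asked.
- Do not reformat or rename code outside the lines you touch.
- If a fix needs more than ~50 changed lines, stop and explain why first.
```

Putting the constraints in the repo, rather than in each person’s head, means every AI-assisted session starts from the same boundaries.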
Again, the pattern is clear:
AI lets everyone touch more of the product
But without strong constraints and leadership, it also makes over-engineering much cheaper and much more tempting
Lesson 8: The new scarce resource → human attention
My last learning is probably the one that impacted me the most personally.
It comes from Mehdi, founder of Revo, an AI-first OS for product organizations.
He told me:
“In a world where AI can do almost everything, the rare resource is no longer production. It’s human attention.” — Mehdi Djabri, Revo.pm
AI is going to be able to generate more:
insights
analyses
options
possible roadmaps
…than any team can absorb.
So the constraint shifts:
Before: time, execution capacity, engineering headcount
Now / next: what we look at, what we ignore, what we decide to do, and how we align around it
Mehdi’s point is that humans won’t be “replaced”.
They’ll be recentered on:
governance
intent
prioritization
values
meaning
In other words:
The key question stops being “What can AI do?” and becomes “What do we want AI to do for us?”
I left San Francisco with the feeling that this is the real frontier. Not just using AI to go faster, but designing organizations and rituals that protect and focus human attention.
So… what does all this mean for us in Europe?
Spending a week in the Bay with teams like OpenAI, DeepMind, Rippling, Gamma, Vercel, AnyShift, Fabi, Revo, YC and others made one thing obvious:
AI is not a “feature”. It’s becoming a new infrastructure for work: mental, organizational, and technical.
The real gap between Europe and Silicon Valley is not talent.
It’s:
mindset (default ambition, default AI usage)
speed (prototyping, shipping, iterating)
evaluation discipline
willingness to let roles and org charts evolve
And behind all of that, one simple but uncomfortable question:
Are we ready to treat AI as a core part of how we build products,
not just a cosmetic add-on?
If this newsletter makes you think “OK, we need to seriously look at this inside our product & engineering org”, then it did its job.
📺 Watch the full documentary
If you want to see these conversations and examples in context — including DevDay, my chat with Sam Altman, Nicolas (YC), Romain (OpenAI), and all the teams mentioned above — you can watch the full documentary here:
If you enjoyed this breakdown and you want more concrete stories on how teams are using AI to build and scale products, just reply to this email and tell me what you’d like me to dig into next.
Timothe











