This edition is brought to you by Athyna

Good morning to all new and old readers! Here is your Saturday edition of Faster Than Normal, exploring the stories, ideas, and frameworks of the world’s most prolific people and companies—and how you can apply them to build businesses, wealth, and the most important asset of all: yourself. 

Today, we're covering one of the most underreported talent stories in AI: why post-training is breaking the U.S. PhD pipeline, and why Latin America is sitting on the answer.

If you enjoy this, feel free to forward along to a friend or colleague who might too. First time reading? Sign up here.

What you’ll learn:

  • Why AI has two bottlenecks, and why one is much harder to scale than the other

  • How Latin America quietly built a world-class research pipeline

  • The structural forces converging to create AI's next talent market

  • Lessons on geographic arbitrage, distributed teams, and moving before the competition figures it out

Cheers,

Alex

P.S. Send me feedback on how we can improve. We want to be worthy of your time. I respond to every email.

AI's Quiet Bottleneck

The AI race has two bottlenecks: compute and people. Labs are investing heavily in both. Chips, clusters, energy on one side. PhD-level domain experts on the other.

The compute side has a clear path: more capital, more infrastructure.

The people side is harder to scale. And that's where Latin America comes in.

Specifically: the thousands of domain experts needed for "post-training work"—the phase where a raw AI model gets refined into something actually useful. The phase where you take a foundation model and turn it into a radiology assistant, a contract law tool, a financial compliance engine.

That work requires a very specific kind of person. Not just an ML engineer. A researcher who understands clinical reasoning, case law, or regulatory frameworks—and can evaluate model outputs, design safety frameworks, and catch the edge cases that matter.

The U.S. supply is not enough. And there's one region sitting on an enormous, largely untapped pool.

The Math Doesn't Work

Labs are investing in solving this. But you can't just buy more domain experts the way you buy more GPUs.

Here's the problem in numbers.

U.S. universities awarded fewer than 3,000 Computer Science PhDs in 2023, according to the National Science Foundation. Physics and Mathematics added roughly 4,000 more. Nearly half are international students, many of whom face visa constraints after graduation.

Meanwhile, McKinsey research shows the number of U.S. workers in roles requiring explicit AI fluency grew sevenfold between 2023 and 2025. From 1 million to 7 million.

In two years.

LinkedIn's 2026 Jobs on the Rise report makes it concrete: AI Engineer, AI Consultant, and AI/ML Researcher now rank among the five fastest-growing roles in the United States. Alongside them: Data Annotators—the frontline workers who label and review data to train models.

Demand is growing. Supply isn't.

The bottleneck is structural. And it's getting worse.

Why Post-Training Changed Everything

For years, the AI talent conversation was about machine learning engineers building foundation models. Capital-intensive work. Concentrated in a handful of frontier labs.

Post-training changed the equation.

Pre-training is a GPU problem. Massive clusters running overnight. You can throw money at it.

Post-training is a people problem. Thousands of hours of expert evaluation, safety testing, and domain adaptation. Done by people who genuinely understand the domains where AI is being deployed.

It's iterative. A domain expert identifies an edge case. An engineer adjusts the approach. The expert validates the output. Repeat. Hundreds of times. Per model.

You can't automate that. You need the humans.

And the humans—at least the U.S.-based ones—aren't available at the scale the industry needs.

Latin America's Untapped Edge

While AI labs compete for the same constrained pool of domestic researchers, Latin America has been quietly building something remarkable.

Brazil graduates ~20,000 PhDs per year. Mexico, ~3,000. Argentina, ~2,500. Chile, ~1,000. Across all disciplines—with thousands in STEM: computer science, mathematics, physics, engineering.

These aren't insular programs. Researchers from USP, Unicamp, UNAM, and CONICET regularly publish in Nature, IEEE, and ACM venues alongside colleagues from MIT, Stanford, Oxford, and Max Planck.

The quality is there.

And the economics? Hiring at 40–60% lower cost than comparable U.S. roles. Not because of lower quality. Because of geography and the lower cost of living.

Then there's the timezone factor: often overlooked, but genuinely important at scale. Brazil, Mexico, and Argentina sit 1 to 4 hours from U.S. Eastern Time. A researcher in São Paulo starts their day as New York wakes up. A team in Buenos Aires overlaps four hours daily with San Francisco.

Post-training is an iterative loop. When problems can be resolved in hours instead of days, development velocity compounds.

Three Forces Converging Right Now

This isn't just a talent arbitrage story. Three structural forces are hitting simultaneously.

Demand. Post-training has become a primary bottleneck in AI deployment. It requires PhD-level expertise the U.S. can't supply at scale.

Supply. Latin America produces researchers precisely aligned with the domain and quantitative backgrounds post-training demands.

Infrastructure. Remote work—normalized during the pandemic—removed the geographic constraints that once required researchers to physically relocate to contribute to cutting-edge AI development.

Three forces. One market mismatch. One obvious solution.

What This Means for You

The AI industry spent two years learning that compute alone doesn't solve intelligence.

The next lesson is coming into focus: intelligence without distributed expertise doesn't solve real-world problems either.

Forward-thinking organizations are already exploring a different model. Distributed research teams that combine U.S. leadership with Latin American depth. Senior researchers directing work. PhD-level contributors executing.

The structural forces are in place. What's missing is execution.

For AI labs, research-driven companies, and organizations deploying specialized models: the question is no longer whether to build distributed teams. It's whether to move while the talent market is still underutilized. Or wait until the competition figures it out.

Lessons

Lesson 1: The bottleneck shift is always the opportunity. AI's people bottleneck is growing faster than the industry can solve it domestically. That's where the leverage is now. When a constraint emerges, whoever finds the solution first captures disproportionate value. Pay attention to where talent is scarce, not just where capital is flowing.

Lesson 2: Geographic arbitrage is still massively underused. 40–60% cost savings for equivalent research quality isn't a minor efficiency gain—it's a competitive advantage that compounds over time. The companies that build global research teams now will have structural cost and scale advantages that latecomers can't easily close.

Lesson 3: Time zones are an underrated moat. Async teams are fine for writing code. Post-training is an iterative loop: expert identifies edge case, engineer adjusts, expert validates. LatAm's overlap with U.S. hours isn't incidental. It's a feature.

Lesson 4: Move before the market prices it in. The LatAm AI talent pool is underutilized precisely because it's not yet well-known. That window closes. The companies building relationships with researchers in São Paulo, Buenos Aires, and Mexico City today will have first-mover advantages in hiring, pricing, and relationships. Waiting is a strategy—just not a good one.

Lesson 5: The next breakthrough may not come from where you expect. Silicon Valley has historically concentrated AI talent through proximity and prestige. Remote work broke that model. The next major contribution to AI might come from a PhD in Buenos Aires working on a model evaluation framework that nobody in San Francisco thought to design. Geography used to constrain brilliance. It doesn't anymore.

That’s all for today, folks. As always, please give me your feedback. Which section is your favourite? What do you want to see more or less of? Other suggestions? Please let me know.

Have a wonderful rest of your week, all.

Recommendation Zone

The fastest way to access LatAm's AI research talent

If this piece resonated, here's how to act on it.

Athyna Intelligence connects AI labs and research-driven companies with PhD- and Master's-level researchers across Brazil, Mexico, Argentina, and beyond. Pre-vetted for technical depth, English fluency, and readiness to collaborate with global teams.

They've spent years building the infrastructure to source, vet, and deploy highly skilled professionals internationally. Before AI post-training was a mainstream challenge, Athyna was already developing the matching platform to solve it.

The result: faster access to scarce expertise, U.S.-aligned collaboration, and research capacity that scales at 40–60% lower cost than comparable domestic hires.

Alex Brogan

Offshore Talent: Where to find the best offshore talent. Powered by Athyna.

Why Faster Than Normal? Our mission is to be a friend to the ambitious, a mentor to the becoming, and a partner to the bold. We achieve this by sharing the stories, ideas, and frameworks of the world's most prolific people and companies—and how you can apply them to build businesses, wealth, and the most important asset of all: yourself.

Faster Than Normal is a 'state of being' rather than an outcome. Outlier performance requires continuous, compounded improvement. We're your partner on this journey.

Send us your feedback and help us continuously improve our content and achieve our mission. We want to hear from you and respond to everyone.

Interested in reaching Founders, Operators, and Investors like you? To become a Faster Than Normal partner, apply here.