On June 12, Alexandr Wang stepped down as Scale’s CEO to chase his most ambitious moonshot yet: building smarter-than-human AI as head of Meta’s new “superintelligence” division. As part of his move, Meta will invest $14.3 billion for a minority stake in Scale AI, but the real prize isn’t his company—it’s Wang himself.
Wang, 28, is expected to bring a sense of urgency to Meta’s AI efforts, which this year have been plagued by delays and underwhelming performance. Once the undisputed leader in open-weight AI, the U.S. tech giant has been overtaken by Chinese rivals like DeepSeek on popular benchmarks. Although Wang, who dropped out of MIT at 19, lacks the academic chops of some of his peers, he offers both insight into the types of data Meta’s rivals use to improve their AI systems and unrivaled ambition. Google and OpenAI are both reportedly winding down their work with Scale AI in the wake of the Meta investment. Scale declined to comment, but its interim CEO has emphasized in a blog post that the company will continue to operate independently.
Big goals are Wang’s thing. By 24, he’d become the world’s youngest self-made billionaire by building Scale into a major player labeling data for the artificial intelligence industry’s giants. “Ambition shapes reality,” reads one of Scale’s core values—a motto Wang crafted. That drive has earned him admiration from OpenAI CEO Sam Altman, who lived in Wang’s apartment for months during the pandemic.
But his relentless ambition has come with trade-offs. He credits Scale’s success to treating data as a “first-class problem,” but that focus didn’t always extend to the company’s army of over 240,000 contract workers, some of whom have faced delayed, reduced, or canceled payments after completing tasks. Lucy Guo, who co-founded Scale but left in 2018 following disagreements with Wang, says it was one of their “clashing points.”
“I was like, ‘we need to focus on making sure they get paid out on time,’” while Wang was more concerned with growth, Guo says. Scale AI has said instances of late payment are exceedingly rare and that it is constantly improving.
The stakes of this growth-at-all-costs mindset are rising. Superintelligent AI “would amount to the most precarious technological development since the nuclear bomb,” according to a policy paper Wang co-authored in March with Eric Schmidt, Google’s former CEO, and Dan Hendrycks, the director of the Center for AI Safety. Wang’s new role at Meta makes him a key decision maker on a technology that, by his own account, leaves no room for error.
TIME spoke to Wang in April, before he stepped down as Scale’s CEO. He discussed his leadership style, how prepared the U.S. is for AGI, and AI’s “deficiencies.”
This interview has been condensed and edited for clarity.
Your leadership style has been described as very in-the-weeds. For example, it’s been reported that you would take a one-on-one call with every new employee even as headcount reached into the hundreds. How has your view of leadership evolved as Scale has grown?
Leadership is a very multifaceted discipline, right? There’s level one—can you accomplish the things that are right in front of you? Level two is: are the things that you’re doing even the right things? Are you pointing in the right direction? And then there’s a lot of the level three stuff, which is probably the most important—what’s the culture of the organization? All that kind of stuff.
I definitely think my approach to leadership is one of very high attention to detail, being very in-the-weeds, being quite focused, instilling a high level of urgency, really trying to ensure that the organization is moving as quickly and as urgently towards the critical problems as possible.
But also layering in, how do you develop a healthy culture? How do you develop an organization where people are put in positions where they’re able to do their best work, and they’re constantly learning and growing within these environments? When you’re pointed at a mission that is larger than life, then you have the ability to accomplish things that are truly great.
Since a trip to China in 2018, you’ve been outspoken about the threat posed by China’s AI ambitions. Now, particularly in the wake of DeepSeek, this view has become a lot more dominant in Washington. Do you have any other takes regarding AI development that might be kind of fringe now, but will become mainstream in five years or so?
I think the agentic world—one where businesses and governments are increasingly doing more and more of their economic activity with agents, where humans more and more feel like managers and overseers of those agents, where we’re starting to shift and offload more economic activity onto agents. This is certainly the future, and how we, as a society, undergo that transition with minimum disruption is very, very non-trivial.
I think it definitely sounds scary when you talk about it, and I think that’s sort of like an indication that it’s not going to be something that’s very easy to accomplish or very easy to do. My belief is, I think that there’s a number of things that we have to build, that we have to get right, that we have to do, to ensure that that transition is smooth.
I think there’s a lot of excitement and energy put towards this sort of agentic world. And we think it touches every facet of our world. So enterprises will become agentic enterprises. Governments will become agentic governments. Warfare will become agentic warfare. It’s going to deeply cut into everything that we do, and there are a few key pieces—both infrastructure that needs to be built, as well as key policy decisions and key decisions [about] how it gets implemented within the economy—that are all quite critical.
What’s your assessment of how prepared and how seriously the U.S. government is taking the possibility of “AGI” [artificial general intelligence]?
I think AI is very, very top of mind for the administration, and I think there’s a lot of trying to assess: What is the rate of progress? How quickly are we going to achieve what most people call AGI? Slower timeframe, faster timeframe? In the case where it’s a faster timeframe, what are the right things to prepare? I think these are major conversations.
If you go to Vice President JD Vance’s speech from the Paris AI Action Summit, he speaks explicitly to this: the concept that the current administration is focused on the American worker, and that it will ensure that AI is beneficial to the American worker.
I think as AI continues to progress—I mean, the industry is moving at a breakneck speed—people will take note and take action.
One job that seems ripe for disruption is data annotation itself. We’ve seen in-house AI models used to caption the training dataset for OpenAI’s Sora, and at the same time, reasoning models are being trained on synthetic self-play data on defined challenges. Do you think those trends pose a threat of disruption to Scale AI’s data annotation business?
I actually think it’s quite the opposite. If you look at the growth in the AI-related jobs around contributing to AI data sets—there are a lot of words for this, but we call them “contributors”—it’s grown exponentially over time. There’s a lot of conversation around whether, as the models get better, the work goes away. The reality is that the work is continuing to grow manyfold, year over year, and you can see this in our growth.
So my expectation actually is, if you draw a line forward, towards an agentic economy, more people actually end up moving towards doing what we’d currently consider AI data work—that’ll be an increasingly large part of the economy.
Why haven’t we been able to automate AI data work?
Automating AI data work is a little bit of a tautology, because AI data work is meant to make the models better, and so if the models were good at the things they were producing data for, then you wouldn’t need it in the first place. So, fundamentally, AI data is all focused on the areas where the models are deficient. And as AI gets applied into more and more places within the economy, we’re only going to find more deficiencies there.
You can stand back and squint and the AI models seem really smart, but if you actually try to use them for any of a number of key workflows in your job, you’d realize they’re quite deficient. And so I think that as a society, humanity will never cease to find areas in which these models need to improve, and that will drive a continual need for AI data work.
One of Scale’s contributions has been to position itself as a technology company as much as a data company. How have you pulled that off and stood out from the competition?
If you take a big step back, AI progress fundamentally relies on three pillars: data, compute, and algorithms. It became very clear that data was one of the key bottlenecks of this industry. Compute and algorithms were also bottlenecks, but data was sort of right there with them.
I think before Scale, there weren’t companies that treated data as the first-class problem it really is. With Scale, one of the things that we’ve really done is treat data with the respect that it deserves. We’ve really sought to understand, “How do we solve this problem in the correct way? How do we solve it in the most tech-forward way?”
Once you have these three pillars, you can build applications on top of the data and the algorithms. And so what we’ve built at Scale is the platform that, first, underpins the data pillar for the entire industry. Then we’ve also found that with that pillar, we’re able to build on top, and we’re able to help businesses and governments build and deploy AI applications on top of their incredible wealth of data. I think that’s really what set us apart.