A dozen or so young men and women, eyes obscured by VR headsets, shuffle around a faux kitchen inside a tech company’s Silicon Valley headquarters. Their arms are bent at the elbows, palms facing down. One pilot stops to pick up a bottle of hot sauce from a counter, hinging at the waist, making sure to keep her hands in view of the camera on her headset at all times. She and her colleagues wear T-shirts emblazoned with the word HUMAN.
Meters away, two humanoid robots, with bulbous joints and expressionless plastic domes for faces, stand at a desk. In front of each is a crumpled towel; to its right, a basket. In slow movements, each gunmetal gray robot grabs a towel by its corners, flattens it out, folds it twice, and deposits it into the basket. More often than not, the towel catches on the edge of the basket and the robot freezes. Then an engineer steps in and returns the towel to a crumpled heap, and the sequence begins again.
This story is part of TIME Best Inventions of 2025.
This was the scene inside the Silicon Valley headquarters of Figure AI on an August morning this year. The three-year-old startup was in a sprint ahead of the October announcement of its next robot, the Figure 03, which was undergoing top-secret training when TIME visited. The robots folding towels were the company’s previous model, the Figure 02, operating the same software that the Figure 03 will use. Since earlier this year, some Figure 02s have been working daily 10-hour shifts lifting parts at a BMW factory, the company says. But most of them remain here on Figure’s campus, a collection of airy San Jose lofts, busy—along with the headset-wearing human “pilots”—collecting data that is being used to train the new 03 model. The Figure 03 will be far different from its predecessor, its makers say. They hope that it will soon become the first robot suitable for carrying out domestic chores in the home, as well as all kinds of manual labor. Figure claims the 03 will be its first mass-producible humanoid, and that it will eventually even work on its own production line. The launch will be a critical moment for this startup of 360 people, which in September announced it had secured $1 billion in investment at a valuation of $39 billion, and which counts Nvidia, Jeff Bezos, OpenAI, and Microsoft among its investors. (Salesforce, whose CEO and co-founder Marc Benioff owns TIME, was also announced as an investor in September.)
Humans have been making robots for decades. Moving robotic shelves sort packages for Amazon, robotic arms assemble cars across the auto industry, and entire factories in China operate with the lights out because they employ no humans at all. But for the most part, these robots look markedly inhuman. They are built for tightly scoped tasks, and tend to operate in controlled environments, segregated from their human peers. Achieving “general robotics”—building a humanoid robot that can navigate the unpredictabilities of the world with the same fluidity as a person—has for decades remained a distant dream.
Until now. Today dozens of companies are racing to be the first to create a viable humanoid robot. Figure faces stiff competition from Tesla’s Optimus division and China’s Unitree, among many others. The size of the opportunity they are chasing is roughly $40 trillion, according to Figure AI’s CEO Brett Adcock, who arrives at that number by calculating the value of all the labor in the global economy. “In the next 10 years—maybe under 10 years—the biggest company in the world will be a humanoid robot company,” Adcock tells TIME. “Every home will have a humanoid,” he says, which will do domestic chores from emptying the dishwasher to making the bed. “We think there will be billions in the workforce, doing work every day. They’ll be in healthcare, and then ultimately over time they’ll be in space too, helping build colonies in space and on different planets.” General robotics, he proclaimed in July, would be solvable within 24 months. Perhaps 18.
Of course, tech CEOs are known for making exaggerated claims. But Adcock’s optimism is at least partially grounded in real progress. In the past three years, computer scientists have developed AI that for the first time can do something that approaches “understanding” our messy world. These neural networks can take an image or video and tell you what appears to be going on. They can follow complex, vague, or open-ended instructions. They can simulate reasoning. These advancements in AI have significantly narrowed the once-fearsome challenge of developing a machine that can cope with the unpredictability of earthly affairs. To boot, the hundreds of billions of dollars sloshing around the AI industry have left investors with plenty of cash to back up their optimism.
Figure stands out among its rivals because it is overtly targeting the home—a domain that many of its competitors believe is still many years away. As the halting demonstration of towel folding during TIME’s August visit showed, the challenges remain very real. Another demo, intended to show robots loading laundry into a washer-dryer, hits a similar snag twice in a row, when a Figure 02 drops a piece of laundry on the floor and is unable to pick it up. (On the third try, it successfully loads the washer without dropping anything.) At launch, the Figure 03 won’t actually be ready for domestic use. “We want the robot to be able to do most things in your home, autonomously, all day,” Adcock says. “We’re not there yet. We think we can get there in 2026, but it’s a big push.” Before that, it will be made available to a select list of Figure’s partners for testing. Nevertheless, Figure is focusing much of the marketing around the 03’s launch on domestic settings. In September, TIME witnessed the Figure 03 successfully load items into a dishwasher and clear clutter from a table. It had more trouble when faced with folding T-shirts.
Adcock acknowledges the limitations of even his newest robot, but insists they are easily solvable. He says Figure’s internal neural network, called Helix, is capable of learning new tasks with staggeringly small amounts of data; its towel-folding abilities have come from only 80 hours of video footage. That’s where the pilots come in. Their job is to film themselves carrying out tasks that Figure wants its robots to master—like interacting with kitchen environments, folding laundry, and carrying objects around. This argument, that data is the only missing piece, and that Figure must now go and get it, makes some sense: large language models proved that “scaling” neural networks on masses of data could yield miraculous capability improvements across the board. But its corollary—that major performance increases are just around the corner—is also a convenient way for this company with huge costs, an unproven product, and no publicly disclosed revenue to justify its soaring valuation.
Although the Figure 03 will not be ready for home use upon its release, one thing is clear. The billions of dollars pouring into the robotics industry are making humanoid robots rapidly better—and are probably bringing forward the day that they begin to enter the home and the workforce en masse. Even if this day remains many years away, it will be the harbinger of a societal shock greater than any in living memory.
It is an ordinary sight at Figure’s offices to see Figure 02s wandering past conference rooms, or venturing out into the parking lot with supervision. But when TIME first visits in August, the Figure 03 is tightly under wraps behind a set of locked security doors. I catch my first glimpse of the new robot—or at least, a disassembled version of it—laid out on what looks like an operating table covered in some 30 whirring actuators, wires, and circuit boards. Among the Figure 03’s improvements over the Figure 02: its moving joints are smaller and stronger; its components are 90% cheaper to manufacture; its hands are slimmer, with tactile finger pads and a camera in the palm for delicate tasks; and its battery is less prone to catching fire. When I finally see a fully assembled version of the 5-foot-6 robot, its sleek figure makes plain that it is a lighter machine overall—a feature designers say is intended, in part, to make it less intimidating.
Although the launch is in eight weeks, the Figure 03 is not completely ready yet. Besides a brief demo showing the new robot undergoing a wobbly calibration, the action comes exclusively from older Figure 02s. Executives assure me these robots are running the same improved Helix software as the forthcoming 03, and are demonstrating capabilities that the new robot is intended to have at launch. I later learn that the Figure 03 was only completed in late September, a week before TIME’s video team turned up to shoot it at Adcock’s San Francisco Bay Area home.
What I am given is a demonstration of what executives say will be a new “memory” feature that ships with the Figure 03. (An android butler, of course, is of no use if it cannot remember where to put your laundry.) A Figure 02 stands at a table, on which lie a white cap, a gray cap, and a blue cap. Corey Lynch, Figure’s head of AI, performs a version of the test of object permanence given to babies: he places a set of keys under the blue cap, then switches the positions of the caps on the table. An engineer types: “Show me my keys.” The robot picks up the correct cap, revealing the keys. It’s a demonstration of what Lynch says is an essential capability for domestic robots.
In an audio studio, a limbless Figure 03 demonstrates another new capability—responding to voice prompts, rather than text—with a prank: engineers invite me to ask the robot a question, only for it to respond lucidly in my own voice, which they have apparently cloned using AI. It’s an impressive but profoundly unsettling experience.
It’s clear that innovation is proceeding quickly at Figure. Believe it or not, folding a towel even some of the time is seen as an impressive achievement in today’s robotics industry, given how many unique forms crumpled fabric can take. Even more striking is that most of the capabilities I’m being shown aren’t the result of separate programs that are individually loaded onto the robot, according to the company. They are instead all being learned by the same Helix neural network. It is structured a bit like our own cognition. One part, “system one,” is comparable to our nervous system. “System two” is more like our logical brain. It includes an open-source AI reasoning model trained on text and imagery from the internet, and helps the robot understand the scene and decide what actions to take. Then it sends messages to system one, which translates those directives into instructions that tell the dozens of actuators in the robot exactly what to do, up to 200 times per second.
A third neural network, called “system zero,” handles base reflexes like balance. I’m led to a large square of soft flooring, where two Figure 02 robots stand connected to a gantry. This demonstration is intended to show off advances in the robots’ stability. These improvements to system zero reflexes are described by executives as an essential safety feature, given that a falling robot could cause injury or property damage, or even a fire. The engineers at this station invite me to push a robot to the floor. The robot easily resists my first ginger shove. I try again, harder, and it holds its ground. Then I throw most of my weight against it, to no avail. The engineers explain that the robot’s balance and locomotion have been trained in a simulated environment with slopes, obstacles, and interfering forces, where it has been run for hundreds of thousands of virtual hours. In this way, the robot’s system zero has learned by trial and error to walk and stay upright with high accuracy. (Later, when I speak to Adcock, he jokes ominously that the robot might remember my assaults.)
Unfortunately for Figure, there aren’t yet simulations that mimic the real world with enough fidelity to be useful for training more complex tasks—hence the ongoing need for human pilots. The company will spend much of the new $1 billion on its balance sheet hiring humans to collect first-person video data, Adcock says. Figure is currently filling an entire loft on its San Jose campus with varied kitchen and factory layouts, and will soon also begin collecting data from inside residential and business properties owned by its investor Brookfield. In this way, Helix will soon go from being trained on thousands of hours of video data to millions.
Some roboticists aren’t convinced by this strategy. “To think we can teach dexterity to a machine without … being able to measure touch sensations … is probably dumb [and] an expensive mistake,” wrote Rodney Brooks, the co-founder of Roomba maker iRobot, in a September blog post. “Simply collecting visual data … alone can’t possibly be enough to infer how to be dexterous.”
Whatever the answer to this open question, Figure will soon find it out. If it works, it’s possible that collecting data to train robots will become an increasingly large segment of the labor market, just as it already has become for the many thousands of digital workers who train cloud-based AI models. But those jobs might not last long; once a skill has been learned, it can be loaded onto Figure’s entire fleet of robots forever. And the company may eventually collect increasing portions of its training data not from humans but from simulations, or even its own growing fleet of robots.
The robotics industry, Adcock believes, is likely to be a natural monopoly. The bigger your fleet of robots, the cheaper they become to produce and the more data you can collect, which means the faster your robots can improve, creating a natural flywheel effect where the industry’s early leader can begin to distance itself from its rivals. “The first mover gets a cheaper and smarter [robot] over time,” Adcock says. “And I think that becomes very, very, very difficult to catch.”
Adcock, 39, is a serial entrepreneur. His first company was a talent marketplace called Vettery, which he eventually sold for $110 million. His second, Archer Aviation, builds electric vertical takeoff and landing aircraft, and went public in 2021 at a valuation of $2.7 billion. A side project, Cover, makes AI to detect concealed weapons. A handful of Adcock’s colleagues have followed him from company to company, citing his work ethic and vision. “If people are working late, he goes home, puts the kids to bed, has dinner with his family, and then comes back,” says Lee Randaccio, the vice president of growth at Figure, who worked with him at Vettery and Archer.
Adcock says he elected to leave Archer in 2022 to found Figure AI, after becoming convinced that humanoid robots were the future. But the circumstances of Adcock’s departure from Archer are disputed. A spokesperson for Archer, where Adcock was co-CEO, says his departure followed a decision by the board, without elaborating further. A Figure spokesperson says Adcock’s resignation was “entirely his own voluntary decision.”
That same year, Adcock met with two trusted lieutenants in his basement. He described to them an idea to start a humanoid robotics company. “He was like: I think it could be the biggest market in the world,” says Logan Berkowitz, Figure’s vice president of business operations, who attended the meeting along with Randaccio, to whom he is married. “He was looking at the labor statistics and I think his mind was exploding,” Berkowitz says. “Like, ‘holy cow, if we can tap this market, this is a trillion dollar company.’”
Within a year of Figure’s founding, the company had built a hulking silver robot with exposed wires—the Figure 01. A year after that, they had built the sleeker Figure 02. From the beginning, the company paid close attention to producing glossy videos of its robots to be shared with prospective investors and on social media. An early video shows the Figure 01 walking by itself, accompanied by an electronic dance music soundtrack. In June of this year, the company uploaded an hour-long unedited video of the Figure 02 sorting packages on a conveyor belt. And a week before my visit, they posted a video of the Figure 02 successfully folding five towels in a row. Videos are commonly used in the robotics industry to generate hype—but they are less useful as a barometer of a robot’s abilities. “One thing you learn in robotics,” says Hans Peter Brondmo, a former vice president at Google’s Everyday Robot project, “is to never trust a YouTube video.” Adcock is scornful of competitors, some of whom he says secretly use remotely controlled robots in demonstrations. Figure, he says, never does that.
Figure signed its first customer, BMW, in 2024, and began putting its robots on the factory floor for the first time. Starting in April, the companies expanded that partnership, with multiple Figure 02 robots “working 10 hours a day, five days a week” at BMW’s Spartanburg factory, a spokesperson for the carmaker said in a statement. Both Figure and BMW declined to specify exactly how many robots are now working at the factory, or to share any financial details of their relationship. “On the line, the robot picks up parts and places them onto fixtures during live production,” the BMW spokesperson said. “The parts loaded by the robot are incorporated into the BMW X3, which is assembled at the plant. We are pleased with our relationship with Figure and the progress that has been made since we started full-time on the line in April.”
A central truth about today’s AI is that it is unpredictable. The precise thing that makes neural networks so powerful—their ability to learn not from instructions but from large quantities of data—is also what makes them so difficult to control. In chatbots, this results in the ability of “jailbroken” models to create terrorist manuals, for example. But by and large the levers available to a text model to perpetrate harm are limited. The same cannot be said for robots. A domestic robot has access to your kitchen knives. A hallucination by a chatbot is annoying; a hallucination by a robot could be deadly.
Adcock professes to take safety seriously. He is testing a Figure 03 in his own home, where he has young children. (There the machine is subject to “hardcore babysitting,” he says.) Figure, he adds, has an internal version of Isaac Asimov’s famous three laws of robotics, popularized in the short story collection I, Robot:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Adcock declines to share details of Figure’s three laws. Those are proprietary, he says. Getting safety right is the main barrier between Figure and the trillion-dollar opportunity of general robotics. He speaks about Figure’s safety systems as if they are not fully effective yet. “Getting the robot to be extremely safe in the home long-term is a really hard problem, maybe one of the hardest problems we face,” he says. In fact, it is a cascading set of problems: making sure the robot doesn’t cause harm accidentally; making sure the “reasoning” model in its system-two brain makes safe decisions; and ensuring that when the system-one nervous system must bypass system two to respond quickly to some environmental change, these reflexive actions are not also unsafe. Then there’s making sure that the robot’s memory, which includes all the most intimate details of your home life, is safe from hackers. If it’s any consolation, the Figure 03 is at least designed to not be strong enough to be physically harmful. “You’ll be able to overpower all the robots,” Adcock says. “And outrun them.”
And that’s before you get to the question of data collection. Because Figure needs more data to train its robots, the company plans to eventually use data from people’s home robots to train future models, Adcock says. Figure has “every intention to do the right thing with everybody’s data,” including scrubbing personal information from it before using it for training, he says when pressed about the privacy implications of this stance. Asked for further details, a Figure spokesperson says the company intends to detect, blur, and replace personal information in data from inside the home, similar to how Google Street View blurs faces.
All of which might make putting a Figure robot in one’s home a daunting proposition. Some competitors are more cautious. Texas-based Apptronik, which works with Google on integrating the tech giant’s Gemini AI model into humanoid robots, says it is first targeting industrial use cases—leaving the home as a goal to be tackled in years to come, once the safety and reliability challenges are solved. “I want a robot in my house as much as anyone does,” says Jeff Cardenas, Apptronik’s CEO. “I’m tired of folding my laundry. But there’s a lot of things that we want to solve and make sure we get right before that can scale.”
It’s clear that domestic robots have a long way to go. But if progress in robotics proceeds anywhere close to the speed of the wider AI industry, that wide distance may nevertheless be traversed in a short space of time. In 2019, the predecessor to ChatGPT was barely able to string a coherent sentence together; just three years later, fed with more data and computing power, ChatGPT became a world sensation. Two years after that, AI is beating humans at math competitions and being blamed for swelling youth unemployment.
If there’s even a small chance that this audacious company—or its competitors—can succeed in its goal, the implications would be nothing less than world-changing. With the global population expected to peak this century before heading into decline, the arrival of robots might allow the world economy to continue growing even as human labor becomes less abundant. Robotic labor could cause the cost of goods and services to plummet, potentially enabling an improved quality of life for all. If the arrival of the domestic robot is anything like the arrival of the washing machine and the dishwasher, it might be a boon for women, on whom the majority of domestic burdens still fall. And as the global population grows older, robots might play a crucial role in helping people to grow old with dignity.
But liberating humans from work would also mean liberating them from their paychecks. Robots can perform labor for longer than eight hours per day, and they don’t demand breaks, rights, or wages. Populations would lose their bargaining power, and robot police and armies could turbocharge forms of coercive control. In Adcock’s imagined future, the AI and robotics revolutions will need to be accompanied by something like a universal basic income. But there’s also an alternate future—perhaps one that more resembles the political economy of the present—where tech trillionaires lock in their new power, sideline the state as a political force, and usher in a world where most people are trapped in a permanent underclass.
Both futures, today, remain possible. “This technology has a tremendous potential to provide value, and provide good, but if it just makes large corporations richer, that’s not going to be a good outcome,” says Brondmo. “I believe [robotics] is less of a technological challenge, and more of a policy challenge. We need to fundamentally rethink the social contract.”
Asked about the potential for his inventions to cause suffering rather than liberation, Adcock counters with optimism. “When you have automation systems that can basically do everything a human can, and that will ultimately build themselves and self-replicate, I think the cost of goods and services collapses to a point where it raises wealth for everybody,” he says. “This new age of technology is going to be very prosperous for everybody in the world.”
On a sunny morning in September, Adcock welcomes a TIME video and photo team into his weekend home in the Bay Area of California. The Figure 03 is ready now, and five of the robots take shifts demonstrating their capabilities on camera while Adcock and his team play croquet on the lawn outside. One robot puts dishes into a dishwasher with an impressive degree of accuracy. Another loads laundry into a washer-dryer—and again doesn’t pick up an item it drops. Yet another 03 struggles to fold T-shirts. But a week ago, it didn’t exist.
—With reporting by Dilys Ng/California