ChatGPT sucks at being a real robot


There’s something sad about seeing a humanoid robot lying on the floor. Without any electricity, these bipedal machines can’t stand up, so if they’re powered down and not hanging from a winch, they’re sprawled out on the floor, staring up at you, helpless.

That’s how I met Atlas a couple of months ago. I’d seen the robot on YouTube a hundred times, running obstacle courses and doing backflips. Then I saw it on the floor of a lab at MIT. It was just lying there. The contrast is jarring, if only because humanoid robots have become so much more capable and ubiquitous since Atlas got famous on YouTube.

Across town at Boston Dynamics, the company that makes Atlas, a newer version of the humanoid robot had learned not only to walk but also to drop things and pick them back up instinctively, thanks to a single artificial intelligence model that controls its movement. Some of these next-generation Atlas robots will soon be working on factory floors — and may venture further. Thanks in part to AI, general-purpose humanoids of all types seem inevitable.

“In Shenzhen, you can already see them walking down the street every once in a while,” Russ Tedrake told me back at MIT. “You’ll start seeing them in your life in places that are probably dull, dirty, and dangerous.”

Tedrake runs the Robot Locomotion Group at the MIT Computer Science and Artificial Intelligence Lab, also known as CSAIL, and he co-led the project that produced the latest AI-powered Atlas. Walking was once the hard thing for robots to learn, but not anymore. Tedrake’s group has shifted focus from teaching robots how to move to helping them understand and interact with the world through software, namely AI. They’re not the only ones.

In the United States, venture capital investment in robotics startups grew from $42.6 million in 2020 to nearly $2.8 billion in 2025. Morgan Stanley predicts that cumulative global sales of humanoids will reach 900,000 in 2030 and explode to more than 1 billion by 2050, the vast majority of them for industrial and commercial purposes. Some believe these robots will ultimately replace human labor, ushering in a new global economic order. After all, we designed the world for humans, so humanoids should be able to navigate it with ease and do what we do.

[Illustration by Janik Söllner for Vox]

They won’t all be factory workers, if certain startups get their way. A company called 1X Technologies has started taking preorders for its $20,000 home robot, Neo, which wears clothes, does dishes, and fetches snacks from the fridge. Figure AI introduced its Figure 03 humanoid robot, which also does chores. Sunday Robotics said it would have fully autonomous robots making coffee in beta testers’ homes next year.

So far, we’ve seen a lot of demos of these AI-powered home robots and promises from the industrial humanoid makers, but not much in the way of a new global economic order. Demos of home robots, like the 1X Neo, have relied on human operators, making these automatons, in practice, more like puppets. Reports suggest that Figure AI and Apptronik have only one or two robots on manufacturing floors at any given time, usually doing menial tasks. That’s a proof of concept, not a threat to the human workforce.

You can think of all these robots as the physical embodiment of AI, or just embodied AI. This is what happens when you put AI into a physical system, enabling it to interact with the real world. Whether that’s in the form of a humanoid robot or an autonomous car, it’s the next frontier for hardware and, arguably, technological progress writ large.

Embodied AI is already transforming how farming works, how we move goods around the world, and what’s possible in surgical theaters. We might be just one or two breakthroughs away from walking, talking, thinking machines that can work alongside us, unlocking a whole new realm of possibilities. “Might” is the key word there.

“If we’re looking for robots that will work side by side with us in the next couple of years, I don’t think it will be humanoids,” Daniela Rus, director of CSAIL, told me not long after I left Tedrake’s lab. “Humanoids are really complicated, and we have to make them better. And in order to make them better, we have to make AI better.”

So to understand the gap between the hype around humanoids and the technology’s real promise, you have to know what AI can and can’t do for robots. You also, unfortunately, have to try to understand what Elon Musk has been up to at Tesla for the past five years.

It’s still embarrassing to watch the part of the Tesla AI Day presentation in 2021 when a person dressed in a robot costume appears on stage dancing to dubstep music. Musk eventually stops the dance and announces that Tesla, “a robotics company,” will have a prototype of a general-purpose humanoid robot, now known as Optimus, the following year. Not many people believed him, and now, years later, Tesla still has not delivered a fully functional Optimus. Never afraid to make a prediction, Musk told audiences at Davos in January 2026 that Tesla’s robot will go on sale next year.

“People took him seriously because he had a great track record,” said Ken Goldberg, a roboticist at the University of California-Berkeley and co-founder of Ambi Robotics. “I think people were inspired by that.”

You can imagine why people got excited, though. With the Optimus robot, Elon Musk promised to eliminate poverty and offer shareholders “infinite” profits. He said engineers could effectively translate Tesla’s self-driving car technology into software that could power autonomous robots that could work in factories or help around the house. It’s a version of the same vision humanoid robotics startups are chasing today, albeit colored by several years of Musk’s unfulfilled promises.

We now know that Optimus struggles with a lot of the same problems as other attempts at general-purpose humanoids. It often requires humans to remotely operate it, and it struggles with dexterity and precision. The 1X Neo, likewise, needed a human’s help to open a refrigerator door and collapsed onto the floor in a demo for a New York Times journalist last year. The hardware seems capable enough. Optimus can dance, and Neo can fold clothes, albeit a bit clumsily. But they don’t yet understand physics. They don’t know how to plan or to improvise. They certainly can’t think.

“People in general get too excited by the idea of the robot and not the reality,” said Rodney Brooks, co-founder of iRobot, makers of the Roomba robot vacuum. Brooks, a former CSAIL director, has written extensively and skeptically about humanoid robots.

Clearly, there’s a gap between what’s happening in research labs and what’s being deployed in the real world. Some of the optimism around humanoids is based on good science, though. In 2023, Tedrake coauthored a landmark paper with Tony Zhao, co-founder and CEO of Sunday Robotics, that outlined a novel method for training robots to move like humans. It involves humans performing tasks while wearing sensor-laden gloves, which send data to an AI model that lets the robot figure out how to do those tasks itself. This complemented work Tedrake was doing at the Toyota Research Institute that applied the same kinds of methods AI models use to generate images to the problem of generating robot behavior. You’ve heard of large language models, or LLMs. Tedrake calls these large behavior models, or LBMs.

It makes sense. By watching humans do things over and over, these AI models collect enough data to generate new behaviors that can adapt to changing environments. Folding laundry is a popular example of a task that requires nimble hands and a better brain. If a robot picks up a shirt and the fabric flops down in an unexpected way, it needs to figure out how to handle that uncertainty. You can’t simply program it to know what to do when there are so many variables. You can, however, teach it to learn.

That’s what makes the lemonade demo so impressive. Some of Rus’s students at CSAIL have been teaching a humanoid robot named Rudy to make lemonade — something that you might want a robot butler to do one day — by wearing sensors that measure not only the movements but also the forces involved. It’s a combination of delicate movements, like pouring sugar, and strong ones, like lifting a jug of water. I watched Rudy do this without spilling a drop. It hadn’t been programmed to make lemonade. It had learned.
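The pipeline in Tedrake and Zhao’s paper is built on diffusion models, but the core of learning from demonstration can be sketched as plain supervised imitation: record what the demonstrator sensed alongside what the demonstrator did, then fit a policy that maps one to the other. Here is a minimal, entirely hypothetical Python sketch; the data, dimensions, and least-squares “policy” are stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstration data: each timestep pairs what the robot
# senses (joint angles, forces) with what the human demonstrator did.
obs_dim, act_dim, steps = 12, 7, 5000
observations = rng.normal(size=(steps, obs_dim))

# A stand-in "expert" that the demonstrations implicitly encode.
expert = rng.normal(size=(obs_dim, act_dim))
actions = observations @ expert + 0.01 * rng.normal(size=(steps, act_dim))

# Behavior cloning: fit a policy that imitates the demonstrator.
# (Least squares here; the real work uses deep generative models.)
policy, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# At run time, the robot maps a fresh observation to an action.
new_obs = rng.normal(size=obs_dim)
print((new_obs @ policy).round(3))
```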

The real challenge is getting this method to scale. One way is simply to brute-force it: Employ thousands of humans to perform basic tasks, like folding laundry, to build foundation models for the physical world. Foundation models are massive AI models, trained on huge amounts of data, that can be adapted to specific tasks like generating text, images, or, in this case, robot behavior. You can also get humans to teleoperate countless robots in order to train these models. These so-called arm farms already exist in warehouses in Eastern Europe, and they’re about as dystopian as they sound.

Another option is YouTube. There are a lot of how-to videos on YouTube, and some researchers think that feeding them all into an AI model will provide enough data to give robots a better understanding of how the world works. These two-dimensional videos are obviously limited, if only because they can’t tell us anything about the physics of the objects in the frame. The same goes for synthetic data, which involves a computer rapidly and repeatedly carrying out a task in a simulation. The upside here, of course, is more data, more quickly. The downside is that the data isn’t as good, especially when it comes to physical forces like friction and torque, which also happen to be the most important for robot dexterity.

“Physics is a tough task to master,” Brooks said. “And if you have a robot, which is not good with physics, in the presence of people, it doesn’t end well.”

[Illustration: a robot butler tripping up some stairs, food and drinks flying everywhere. Janik Söllner for Vox]

That’s not even taking into account the many other bottlenecks facing robotics right now. Components have gotten cheaper — you can buy a humanoid robot right now for less than $6,000, compared to the $75,000 it cost to buy Boston Dynamics’ small, four-legged robot Spot five years ago — but batteries remain a major constraint, limiting the run time of most humanoids to two to four hours.

Then you have the problem of processing power. The AI models that can make humanoids more human require massive amounts of compute. If that computation happens in the cloud, you get latency issues that prevent the robot from reacting in real time. And, to tie a lot of other constraints into a tidy bundle: the AI is just not good enough.

If you trace the history of AI and the history of robotics back to their origins, you’ll see a braided line. The two technologies have intersected time and again since the birth of the term “artificial intelligence” at a Dartmouth research workshop in the summer of 1956. Half a century later, things started heating up on the AI front, when advances in machine learning and powerful processors called GPUs — the things that have now made Nvidia a $5 trillion company — ushered in the era of deep learning. I’m about to throw a few technical terms at you, so bear with me.

Machine learning is a type of AI. It’s when algorithms look for patterns in data and make decisions without being explicitly programmed to do so. Deep learning takes it to another level with the help of a machine learning model called a neural network. You can think of a neural network, a concept that’s even older than AI, as a system loosely modeled on the human brain that’s made up of lots of artificial neurons that do math problems. Deep learning uses multilayered neural networks to learn from huge data sets and to make decisions and predictions. Among other accomplishments, neural networks have revolutionized computer vision, improving perception in robots.
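To make “artificial neurons that do math problems” concrete, here’s a minimal two-layer network in plain NumPy. Every number below is made up; the point is only the shape of the computation: weighted sums pushed through a nonlinearity, stacked in layers.

```python
import numpy as np

def layer(x, weights, bias):
    # Each artificial neuron: a weighted sum of its inputs plus a
    # bias, squashed by a nonlinearity (ReLU here).
    return np.maximum(0, x @ weights + bias)

rng = np.random.default_rng(1)
x = rng.normal(size=4)                     # e.g., four sensor readings

# Two stacked layers: a (very shallow) "deep" network.
hidden = layer(x, rng.normal(size=(4, 8)), np.zeros(8))
output = hidden @ rng.normal(size=(8, 2))  # two output decisions
print(output.round(3))
```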

There are different architectures for neural networks that can do different things, like recognize images or generate text. One is called a transformer. The “GPT” in ChatGPT stands for “generative pre-trained transformer,” which is a type of large language model, or LLM, that powers many generative AI chatbots. While you’d think LLMs would be good at making robots think, they really aren’t. Then there are diffusion models, which are often used for image generation and, more recently, making robots appear to think. The framework that Tedrake and his coauthors described in their 2023 research into using generative AI to train robots is based on diffusion.
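The mechanics of diffusion are easy to caricature in a few lines: start from pure noise and repeatedly denoise it toward something that resembles the training data. In this toy sketch, denoise_step and target stand in for a trained network and the behavior it has learned; a real diffusion policy learns that mapping from demonstration data rather than having it hardcoded.

```python
import numpy as np

rng = np.random.default_rng(2)
target = np.array([0.5, -0.2, 0.8])  # the "good" action a trained model encodes

def denoise_step(action, step, total):
    # Stand-in for a trained denoising network: nudge the noisy
    # action a little closer to what the model believes in.
    return action + (target - action) / (total - step)

action = rng.normal(size=3)          # start from pure noise
for step in range(50):
    action = denoise_step(action, step, 50)
print(action.round(3))               # has converged to the target
```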

Three things stand out in this very limited explanation of how AI and robots get along. The first is that deep learning requires a massive amount of processing power and, as a result, a huge amount of energy. The second is that the latest AI models work with the help of stacks of neural networks whose millions or even billions of artificial neurons do their magic in mysterious and usually inefficient ways. The third is that, while LLMs are good at language and diffusion models are good at images, we don’t have any models that are good enough at physics to send a 200-pound robot marching into a crowd to shake hands and make friends.

As Josh Tenenbaum, a computational cognitive scientist at MIT, explained to me recently, an LLM can make it easier to talk to a robot, but it’s hardly capable of being the robot’s brains. “You could imagine a system where there’s a language model, there’s a chatbot, you want to talk to your robot,” Tenenbaum said. “Under the hood, what’s actually going on should be something much more like our own brains and minds or other animals, not just humans in terms of how it’s embodied and deals with the world.”

So we need better AI for robots, if not in general. Scientists at CSAIL have been working on a couple of physics-inspired and brain-like technologies they’re calling liquid neural networks and linear optical networks. They both fall into the category of state-space models, which are emerging as an alternative, or rival, to transformer-based models. Whereas transformer-based models look at all available data to identify what’s important, state-space models are much more efficient: they maintain a summary of the world that gets updated as new data comes in. It’s closer to how the human brain works.
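That difference shows up clearly in code. A transformer reconsiders its entire input history at every step, while a state-space model keeps a fixed-size state and folds each new observation into it, so the per-step cost never grows. A toy linear state-space update, with made-up matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
state_dim, input_dim = 8, 3
A = 0.9 * np.eye(state_dim)                  # how the summary decays/evolves
B = rng.normal(size=(state_dim, input_dim))  # how new input enters the summary
C = rng.normal(size=(1, state_dim))          # how the summary maps to an output

state = np.zeros(state_dim)                  # fixed-size "summary of the world"
for _ in range(100):
    u = rng.normal(size=input_dim)           # a new observation arrives
    state = A @ state + B @ u                # update; cost is constant per step
    y = C @ state                            # current output or decision
print(y.round(3))
```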

To be perfectly honest, I’d never heard of state-space models until Rus, the CSAIL director, told me about them when we chatted in her office a few weeks ago. She pulled up a video to illustrate the difference between a liquid neural network and a traditional model used for self-driving cars. In it, you can see how the traditional model focuses its attention on everything but the road, while the newer state-space model only looks at the road. If I’m riding in that car, by the way, I want the AI that’s watching the road.

“And instead of a hundred thousand neurons,” Rus said, referring to the traditional neural network, “I have only 19.” And here’s where it gets really compelling. She added, “And because I have only 19, I can actually figure out how these neurons fire and what the correlation is between these neurons and the action of the car.”
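For the curious, here’s a rough, assumption-heavy sketch of what makes a single “liquid” neuron different: in the liquid time-constant formulation from Rus’s group, each neuron follows a small differential equation whose effective time constant shifts with its input. The gate function, weights, and constants below are illustrative, not the published parameters.

```python
import numpy as np

def liquid_neuron_step(x, inp, dt=0.01, tau=1.0, w=1.5, a=2.0):
    # A bounded, input-dependent gate (a sigmoid). It both drives the
    # neuron and modulates its effective time constant, which is the
    # "liquid" part of a liquid time-constant neuron.
    f = 1.0 / (1.0 + np.exp(-(w * inp + 0.5 * x)))
    # Forward-Euler step of: dx/dt = -(1/tau + f) * x + f * a
    return x + dt * (-(1.0 / tau + f) * x + f * a)

x = 0.0
for t in range(500):
    x = liquid_neuron_step(x, np.sin(0.05 * t))  # slowly varying input
print(round(x, 3))
```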

You may have already heard that we don’t really know how AI works. If newer approaches bring us a little bit closer to comprehension, it certainly seems worth taking them seriously, especially if we’re talking about the kinds of brains we’ll put in humanoid robots.

When a humanoid robot loses power, when electricity stops flowing to the motors that keep it upright, it collapses into a heap of heavy metal parts. This can happen for any number of reasons. Maybe it’s a bug in the code or a lost Wi-Fi connection. And when they’re on, humanoids are full of energy, their joints fighting gravity or standing ready to bend. If you imagine being on the wrong side of that incredible mechanical power, it’s easy to doubt this technology.

Even some companies that make humanoid robots admit that the machines aren’t very useful yet. They’re too unreliable to help out around the house, and they’re not efficient enough to be helpful in factories. Furthermore, much of the money spent developing these robots goes toward making them safe around people. When it comes to deploying robots that can contribute to productivity, that can participate in the economy, it makes a lot more sense to make them highly specialized and not human-shaped.

The embodied AI that will transform the world in the near future is what’s already out there. In fact, it’s what’s been out there for years. Early self-driving cars date back to the 1980s, when Ernst Dickmanns put a vision-guided Mercedes van on the streets of Munich. Researchers from Carnegie Mellon University got a minivan to drive itself across the United States in 1995. Now, decades later, Waymo is operating its robotaxi service in a half dozen American cities, and the company says its AI-powered cars actually make the roads safer for everyone.

Then there are the Roombas of the world, the robots that are designed to do one thing and keep getting better at it. You can include the vast array of increasingly intelligent manufacturing and warehouse robots in this camp too. By 2027, the year Elon Musk is on track to miss his deadline to start selling Optimus humanoids to the public, Amazon will reportedly replace more than 600,000 jobs with robots. These would probably be boring robots, but they’re safe and effective.

Science fiction promised us humanoids, however. Pick an era in human history, in fact, and someone was dreaming about an automaton that could move like us, talk like us, and do all our dirty work. Replicants, androids, the Mechanical Turk — all these humanoid fantasies imagined an intelligent synthetic self.

Reality gave us package-toting platforms on wheels roving around Amazon warehouses and sensor-heavy self-driving cars clogging San Francisco streets. Even the skeptics think that humanoids will eventually be possible, though. Probably not in five years, but maybe in 50, we’ll get artificially intelligent companions who can walk alongside us. They’ll take baby steps.

“Good robots are going to be clumsy at first, and you have to find applications where it’s okay for the robot to make mistakes and then recover,” Tedrake said. “Let’s not do open-heart surgery right away with these things. This is more like folding laundry.”


