AI moves from digital to physical
For most of the past decade, AI lived almost entirely in the digital world: cloud services, software workflows, online recommendations, and enterprise productivity tools. Even the launch of ChatGPT in late 2022—just three years ago—was still a fundamentally digital breakthrough.
But we are now entering a new era where AI increasingly governs physical systems. These systems touch all parts of the economy: manufacturing, transportation, logistics, healthcare, construction, energy, and defense.
In popular imagination, “a robot” often evokes the humanoid droids of Star Wars. But humanoids are just one category.
The real robotics landscape includes industrial arms, warehouse pickers, mobile robots, drones, surgical robots, autonomous vehicles, agricultural machines, defense systems, and more. Each category is advancing—and commercializing—at the same time.
More broadly, a robot is any programmable machine capable of sensing its environment, processing information, and taking action, autonomously or semi-autonomously, to achieve a goal in the physical world.
Robots have three defining features. They sense, they compute, and they perform some kind of physical action.
Engagement with the physical world is what differentiates robots from AI agents, which are also on the cusp of massive proliferation.
AI agents behave autonomously as well, but they are purely digital systems. An AI agent can make you a dinner reservation, but a robot can make you dinner (if not right now, then in the not-so-distant future).
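To make the sense-compute-act definition concrete, here is a minimal sketch of the loop every robot runs in some form. The sensor, model, and actuator objects are placeholders for real hardware drivers and AI models, not any particular product's software.

```python
# A minimal sketch of the sense-compute-act loop that defines a robot.
# The sensors, model, and actuators objects stand in for real hardware
# drivers and AI models.
def run_robot(sensors, model, actuators):
    while True:
        observation = sensors.read()        # sense: cameras, lidar, joint encoders
        action = model.decide(observation)  # compute: turn perception into a decision
        actuators.execute(action)           # act: move, grip, steer
```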
Why the acceleration is happening now
AI capability has crossed a critical threshold: foundation models, including the large language models (LLMs) that power chatbots like ChatGPT, can now perceive environments, reason about actions, and control physical devices with growing reliability. These foundation models are helping robot makers resolve core limitations that previously stalled commercial deployment.
Meanwhile, the ongoing AI data center build-out, which is proceeding at a blistering pace, has created abundant compute capacity. As a result, the cost of AI inference has dropped.
Inference is the process of using a trained AI model to generate a real-time output, such as identifying an object, interpreting sensor data, choosing an action, or producing a response.
In other words, inference is AI applying what it has already learned to a specific situation in front of it.
In robotics, inference is what allows a robot to “think on the fly,” turning perception and context into decisions that guide movement and behavior.
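As a concrete illustration, the sketch below runs inference with an off-the-shelf, pretrained image classifier, the kind of perception step a robot might apply to a single camera frame. The model choice (ResNet-18 via torchvision) and the file name frame.jpg are assumptions made for the example, not a reference to any particular robotics stack.

```python
# Inference sketch: apply an already-trained image classifier to one camera
# frame. No learning happens here; the model uses what it already knows to
# label the object in front of it.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

frame = Image.open("frame.jpg")                    # stand-in for a live camera capture
batch = weights.transforms()(frame).unsqueeze(0)   # preprocess to match the training data

with torch.no_grad():                              # inference only, no training
    logits = model(batch)

label = weights.meta["categories"][int(logits.argmax())]
print(f"The robot sees: {label}")
```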
As the world’s biggest tech companies deploy hundreds of billions of dollars into AI data centers, the cost of tapping into AI models is rapidly declining.
Data centers essentially house supercomputers. Robots and other devices can access this intellectual horsepower to “think” more cheaply and more effectively than they could with only on-board hardware.
Just as people lean on a chatbot like ChatGPT to solve problems as they go about their day, so can robots: they can communicate wirelessly over the internet to consult the much larger artificial brains housed in data centers.
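In practice, that conversation can be as simple as the robot compressing a sensor snapshot, sending it over the network to an inference service, and acting on the reply. The endpoint URL and response fields in the sketch below are hypothetical, used only to illustrate the pattern.

```python
# Hypothetical sketch of cloud-offloaded "thinking": send one camera frame to
# a remote inference service and get a decision back. The URL and response
# format are illustrative placeholders, not a real service.
import requests

INFERENCE_URL = "https://robot-brain.example.com/infer"  # hypothetical endpoint

def ask_the_data_center(jpeg_bytes: bytes) -> dict:
    """Send one sensor snapshot to the data center and return its decision."""
    response = requests.post(
        INFERENCE_URL,
        files={"frame": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        timeout=2.0,  # a robot cannot wait long; fall back to on-board logic if this fails
    )
    response.raise_for_status()
    return response.json()  # e.g. {"action": "pick_up_box", "confidence": 0.93}
```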
Declining costs
Robots therefore no longer need to carry expensive processors to perform complex tasks. Much of that intelligence can live in the cloud. This allows the machines themselves to be simpler, lighter, and more affordable.
Lower costs also widen the range of real-world jobs where robots make economic sense. Robots are now capable of handling more complicated tasks like sorting messy items, navigating busy warehouses, or inspecting products on the fly.
Graphics Processing Units (GPUs) are the specialized chips that power AI models, and they keep getting better. As GPUs and LLM architectures continue to improve, the cost per inference (each action or prediction the model makes) falls sharply.
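A back-of-the-envelope calculation, using assumed numbers rather than any vendor's figures, shows why: the cost of a single inference is roughly the hourly cost of the hardware divided by how many inferences it can serve in that hour, so every gain in throughput flows straight into a lower cost per action.

```python
# Back-of-the-envelope cost per inference, with assumed (illustrative) numbers.
gpu_cost_per_hour = 2.00           # assumed cloud GPU rental, dollars per hour
inferences_per_hour = 100 * 3600   # assumed throughput: 100 requests per second

print(f"${gpu_cost_per_hour / inferences_per_hour:.7f} per inference")        # ~$0.0000056

# If a new GPU generation triples throughput at the same hourly price,
# the cost per inference falls by the same factor.
print(f"${gpu_cost_per_hour / (3 * inferences_per_hour):.7f} per inference")  # ~$0.0000019
```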
Companies like NVIDIA (NVDA), the dominant producer of GPUs, are constantly advancing the capabilities of their chips and systems. The data centers they supply thereby become more powerful and efficient, which makes it economically viable to embed AI into more devices, robots, and industrial workflows.
This also creates a powerful feedback loop: cheaper AI enables more robots, more robots generate more data, and more data makes the AI models even better. As that loop begins, adoption tends to accelerate quickly—and that’s exactly what we’re seeing now.
Robots in the real world
Industrial automation sits at the center of this shift. More than four million industrial robots are now operating globally, an all-time high.
Automation is not a new force. While global competition is often cited in debates over the long-term decline of U.S. manufacturing employment, the far more powerful and consistent driver has been automation—a trend underway since the end of World War II.