Moz1 Robot and China’s Race for Real-World AI

Vivi Carter · 19 July 2025
A Sector in Overdrive: The Battle for “Real-World AI” Heats Up
While much of the last decade’s AI excitement has been focused on algorithms talking, writing, and drawing in a purely digital world, 2025 is shaping up to be the year that intelligent machines make the leap into the physical world. “Embodied intelligence,” the age of robots that don’t just think but sense, move, and get work done, has become the next big arena.
From California’s Silicon Valley to research hubs across China, global tech titans and hungry AI startups are racing to take their models out of the cloud and into homes, warehouses, offices, and factories. Companies like Google DeepMind, Figure AI with its end-to-end Helix model, and Physical Intelligence, the startup behind the π series co-founded by UC Berkeley researchers, have set the pace, training robots that can tackle ever more nuanced tasks, from folding laundry to managing kitchen chores.
Amid this surge, only a handful of startups are making the jump from flashy demo videos to serious market traction. In China, Qianxun Intelligence, a company founded in early 2024, is emerging as the sector’s most-watched dark horse.
Major Capital Bets On Real-World Robotics
Just two months after pulling in a $75M investment in April, Qianxun Intelligence is back in the headlines. This June, the company snapped up nearly $83M more in a new round led by e-commerce giant JD.com, alongside major funds like China Internet Investment Fund and top provincial innovation vehicles. That’s nearly $160M in venture capital in less than eighteen months: rocket fuel for hardware, talent, and global scale.
This fast-track support from institutional money signals not just faith in the company's business case, but a broader sense that embodied AI is sprinting from research hype to real-world scale. As JD.com’s involvement suggests, Qianxun’s technology already aligns closely with next-gen logistics and operational automation. The funding validates the belief that “robotics + big data AI + relentless commercialization” is a recipe for future industrial value.
Meet Moz1: The Office Robot Redefining “Assistant”
What put Qianxun on the global radar? Its new humanoid robot, Moz1, may matter more than any fancy investor deck.
Moz1 isn't just another “bipedal marvel.” It pulls off things that have stumped robots for decades:
- Twenty-six degrees of freedom, powered by in-house, high-torque joints with a 15% power density lead on Tesla’s celebrated Optimus (source: Sina Tech).
- An industry-first, high-precision, full-body force control system, enabling it to execute precise motions—from single-sheet tissue grabbing to well-coordinated conference room cleaning.
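To make “full-body force control” concrete: instead of rigidly tracking joint positions, each joint behaves like a tunable spring-damper, so the robot can yield on contact rather than push through it. The sketch below is a minimal joint-space impedance controller in Python; the gains, torque limit, and function names are illustrative assumptions, not Qianxun’s published design (only the 26-DoF count comes from the spec above).

```python
import numpy as np

# Hypothetical joint-space impedance controller. The 26-DoF count comes from
# the article; the gains, limits, and names below are illustrative assumptions.
NUM_JOINTS = 26
K = np.full(NUM_JOINTS, 40.0)   # stiffness gains [Nm/rad]
D = np.full(NUM_JOINTS, 4.0)    # damping gains [Nm*s/rad]
TORQUE_LIMIT = 20.0             # clamp torques to keep contact forces gentle

def impedance_torques(q, dq, q_des, dq_des, tau_ff=None):
    """Compute joint torques that act like a spring-damper around q_des.

    Low stiffness lets a limb "give" on contact (think grabbing a single
    tissue sheet) instead of forcing through with pure position control.
    """
    tau_ff = np.zeros_like(q) if tau_ff is None else tau_ff
    tau = K * (q_des - q) + D * (dq_des - dq) + tau_ff
    return np.clip(tau, -TORQUE_LIMIT, TORQUE_LIMIT)

# One control tick; real joint states would come from encoders.
q, dq = np.zeros(NUM_JOINTS), np.zeros(NUM_JOINTS)
q_des, dq_des = np.full(NUM_JOINTS, 0.1), np.zeros(NUM_JOINTS)
print(impedance_torques(q, dq, q_des, dq_des)[:3])
```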
These feats go beyond party tricks: Moz1 has, for example, been deployed in office environments to independently clean up meeting rooms, sort supplies, and handle mundane chores that eat up human time. Thanks to sensor suites and a proprietary VLA (Vision-Language-Action) brain model, Moz1 acts less like a remote-controlled toy and more like an autonomous colleague.
Under the Hood: The VLA Model and Learning by Doing
Moz1’s “secret sauce” is its home-grown Spirit v1 VLA model, which integrates multiple streams of human behavior—vision, spoken/written language, and physical manipulation. This end-to-end system, like the one powering Figure AI, learns new tasks not just from code, but by extracting information from thousands of hours of video, operator demonstrations, and real-world trial and error.
For instance, folding laundry, a seemingly simple task that is actually hard because fabric deforms unpredictably, becomes possible with roughly 80% success rates. (For context, similar models such as Physical Intelligence’s π series have shown how combining video, imitation, and reinforcement learning unlocks striking generalization.)
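To picture how a VLA model of this kind turns pixels and instructions into motor commands, here is a deliberately tiny PyTorch-style sketch: encode an image, encode an instruction, fuse both with a transformer, and decode a short chunk of joint actions. Every module size, the tokenizer vocabulary, and the action-chunk format are assumptions for exposition, not Spirit v1’s actual architecture.

```python
import torch
import torch.nn as nn

class TinyVLA(nn.Module):
    """Toy vision-language-action policy; all sizes are illustrative."""

    def __init__(self, d_model=256, action_dim=26, chunk=8):
        super().__init__()
        # Stand-in for a real vision backbone (ViT/CNN).
        self.vision = nn.Sequential(nn.Conv2d(3, 32, 8, stride=8), nn.ReLU(), nn.Flatten(2))
        self.vis_proj = nn.Linear(32, d_model)
        self.text_embed = nn.Embedding(32000, d_model)   # stand-in tokenizer vocab
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, action_dim * chunk)
        self.action_dim, self.chunk = action_dim, chunk

    def forward(self, image, instruction_ids):
        vis = self.vis_proj(self.vision(image).transpose(1, 2))  # (B, patches, d)
        txt = self.text_embed(instruction_ids)                   # (B, tokens, d)
        fused = self.fusion(torch.cat([vis, txt], dim=1))        # joint context
        actions = self.action_head(fused.mean(dim=1))            # next action chunk
        return actions.view(-1, self.chunk, self.action_dim)

policy = TinyVLA()
img = torch.randn(1, 3, 224, 224)
ids = torch.randint(0, 32000, (1, 12))   # e.g. "wipe the conference table"
print(policy(img, ids).shape)            # torch.Size([1, 8, 26])
```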
Moz1’s training stack fuses:
- 70% raw video data for visual/action context,
- 20% operator imitation, fine-tuned via the company’s EfficientImitate algorithm,
- 10% state-of-the-art self-learned strategies (see EfficientZero).
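As a hedged sketch of what that 70/20/10 blend could look like in practice, the snippet below mixes minibatches from three sources in fixed proportions. The dataset names and contents are placeholders mirroring the list above, not Qianxun’s actual training pipeline.

```python
import random

# Illustrative data-mixing sampler for the 70/20/10 blend described above.
# Dataset entries are placeholders; the point is that each minibatch draws
# from the three sources in fixed proportions.
SOURCES = {
    "raw_video":          {"weight": 0.70, "data": [f"video_clip_{i}" for i in range(1000)]},
    "operator_imitation": {"weight": 0.20, "data": [f"teleop_demo_{i}" for i in range(200)]},
    "self_learned":       {"weight": 0.10, "data": [f"rl_rollout_{i}" for i in range(100)]},
}

def sample_batch(batch_size=32, seed=None):
    """Draw a minibatch whose composition follows the source weights."""
    rng = random.Random(seed)
    names = list(SOURCES)
    weights = [SOURCES[n]["weight"] for n in names]
    batch = []
    for _ in range(batch_size):
        source = rng.choices(names, weights=weights, k=1)[0]
        batch.append((source, rng.choice(SOURCES[source]["data"])))
    return batch

batch = sample_batch(batch_size=10, seed=0)
print([src for src, _ in batch])   # roughly 7 video, 2 imitation, 1 self-learned
```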
This approach supercharges generalization: robots no longer just memorize routines—they adapt to new scenes, tasks, tools, and surprises.
Qianxun has further advanced the game with its OneTwoVLA framework, eliminating the classic split between “reasoning” and “acting” and merging them into one Transformer-based architecture (preprint). The result? Robots that whip up hotpot, mix cocktails, and recover gracefully from fumbles: genuinely flexible, multitasking helpers.
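One way to picture that unification is a single model that decides, step by step, whether to emit a reasoning step or an action chunk, so error recovery is just another branch of the same policy. The sketch below is a loose schematic of that control flow under assumed names; the branching is hand-written purely for illustration and is not the OneTwoVLA implementation.

```python
from dataclasses import dataclass

@dataclass
class Step:
    kind: str      # "reason" or "act"
    payload: str

def unified_policy(observation: str, goal: str) -> list[Step]:
    """Schematic of one model interleaving reasoning and acting.

    A real OneTwoVLA-style system would let a single transformer choose at
    each step whether to generate reasoning text or low-level action tokens;
    here the choice is mocked to make the control flow concrete.
    """
    steps = []
    if "spill" in observation:   # recover from a fumble before continuing
        steps.append(Step("reason", "Liquid spilled; wipe it up before pouring again."))
        steps.append(Step("act", "grasp_towel(); wipe(region='spill')"))
    steps.append(Step("reason", f"Plan remaining sub-tasks for goal: {goal}"))
    steps.append(Step("act", "execute_next_subtask()"))
    return steps

for step in unified_policy("spill on counter while mixing", "prepare a cocktail"):
    print(step.kind, "->", step.payload)
```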
Market Strategy: Commercialization Beyond Research Demos
Crucially, Qianxun’s success doesn’t just come from technical flair. The company sprinted to market by laser-focusing on “pain points with high willingness to pay” in energy, logistics, hospitality, and healthcare. Its product team spent months mapping real-world workflows and iterating robot features with large, diverse clients. This feedback loop—observe needs → build → test → deploy—sharpened the business case, insulating it against “demo-only syndrome.”
In the office, Moz1 isn’t a showroom piece. It reliably handles cleaning, sorting, and mundane admin, maximizing use of human talent for strategic work. For logistics, its adaptability in picking, packing, and even dynamic rerouting unlocks efficiency today's automation can’t match. As labor costs rise worldwide, robots like Moz1 are poised to reshape not just factories but offices, hospitals, and even homes.
The Road Ahead: Distinctive Technology and Global Ambitions
What truly sets Qianxun apart?
- Relentless technical iteration, fusing hardware and learning systems tailored for real-world variance.
- A world-class team, recruiting top robotics and AI talent, merging East and West technical approaches.
- A market-first mindset—not chasing media sizzle, but practical deployments in high-value segments.
As Qianxun rolls out from the Chinese market to international industries, with code, models, and research increasingly open-sourced ([see GitHub release and arXiv](https://github.com/qianxun-ai)), the company embodies a new vision for AI: not just writing stories, but folding your shirts and keeping the office tidy.
In a rapidly evolving field where “general-purpose robotics” once meant “endless promise, few results,” Moz1 and its kin are beginning to deliver. If you care about the future of real-world AI, keep your eyes on Qianxun and the global cohort of startups building machines that do, not just think.