He collapsed, noting later that the kick was powerful enough to cause injury without protective equipment.
According to the company, the demonstration was meant to dispel suspicions that its previous videos of acrobatic robots had been made with CGI.
Whether the robot acted autonomously or was remotely controlled, the event raises a crucial question: What guarantees exist that robots will not harm humans in real life?
In 1942, imagining a future where humans and robots coexist, science fiction writer Isaac Asimov laid out three fundamental principles of "robot ethics".
The first and most important is that a robot must not harm a human being, nor, through inaction, allow a human to come to harm.
What began as a literary idea, the principle of nonmaleficence, became the foundation for many later discussions of technology ethics, from data to robotics and artificial intelligence.
Standardized guidelines like BS 8611 from the British Standards Institution (BSI) for robot design and application follow the same logic: Identify risks, implement safeguards and treat robots as systems serving human interests.
However, Asimov wrote about "robot ethics" imagining robots with consciousness. In reality, today's robots consist only of actuators, sensors, control software, and AI. "Do no harm" therefore cannot remain a slogan. It must be a technical constraint (safe design, behavioral limits, emergency stops) and a legal one (standards, accountability, penalties).
Yet this binding framework is always challenged by another driving force: the race for technological capability.
History shows that major leaps in robotics are often tied to defense needs, where large, long-term investments aim to achieve breakthrough, superior capabilities.
Boston Dynamics is a prime example, with many of its robot development programs funded or commissioned by DARPA, the research agency under the U.S. Department of Defense.
This does not mean all robots will become weapons. But it reminds us that alongside declarations of humanity, goodwill, and ethics, the real world is racing toward stronger, faster, more efficient systems.
When a system grows large and technologically powerful enough, the question of responsibility inevitably arises: Who is accountable if an accident occurs, and what mechanisms exist to control and prevent such risks?
An EngineAI T800 humanoid robot demonstrates martial arts moves. Photo by EngineAI
Even if the T800's kick was scripted, it signals that humanoid robots are entering a "physical danger zone."
Responsibility cannot rely on promises alone; it requires mechanisms for thorough verification and supervision.
Coincidentally, while the video was gaining traction, Vietnam passed its Artificial Intelligence Law on Dec. 10, 2025, which incorporates a risk-based governance approach.
The law not only encourages development but also establishes a governance structure with core principles, prohibited behaviors, controlled testing mechanisms (sandbox), and a framework for high-risk AI, with a list that can be updated to avoid becoming obsolete.
Alongside the Data Law and Personal Data Protection Law, Vietnam is gradually developing a legal framework for the digital era.
But laws are just a leash, and whether they can be enforced is another matter entirely.
The biggest challenge is not drafting principles but bridging strategy and execution, answering direct questions: Who assesses risks, based on what criteria, and how is supervision conducted?
Drawing on the work of international organizations such as the OECD and the EU, I see three urgent requirements for the law to take effect and provide clear guidance for both regulators and operators.
First, a national AI coordinating body, a true AI Office, is needed. This agency will develop and coordinate the national AI program, implementation roadmap, and mobilize public-private resources for strategic objectives. A reference model could be the European Commission’s European AI Office, which plays a key role in supporting and enforcing the EU AI Act.
Second, risk-based AI governance requires evaluation standards and organizations with the authority and capacity to assess AI. Risk-based governance cannot stop at cliches like "human-centric, transparent, accountable" but must translate these principles into actionable, verifiable criteria: what risks exist in the training data, how human oversight functions, whether logs are traceable and auditable, how bias is tested, and what mechanisms exist to stop and control abnormal system behavior.
Especially in the case of products combining hardware and software, such as robots, the risks lie not only in the "soft" aspects but also in physical impacts. Guidelines like BS 8611 remind us that ethical risks must be managed as a catalog of hazards, not just promises.
Vietnam could reference models such as the AI Safety Institute in the U.K. or AI Verify Foundation in Singapore.
The general trend is to create institutions, establish evaluation standards, promote testing, and secure AI. The goal is a national AI standard and a reliable assessment and certification mechanism, starting with high-risk AI identified by the law.
Third, strategically, AI development should be directed and encouraged in key areas to create substantial impact and quickly establish standards. High-impact sectors such as healthcare, education and public administration are suitable choices due to measurable impact, observable risks and clear social feedback.
These areas also allow iterative learning and sandbox testing under supervision, with outcomes and lessons publicized for broader application.
Choosing the right focus helps avoid a "jack-of-all-trades" AI model, where AI projects are everywhere but ultimately useful nowhere.
The AI Law emerges as the global AI race accelerates. At the same time, kicks like the T800’s remind us that AI and robotics are no longer just software of the future; they have entered the physical space, where the smallest errors can cause real consequences and injuries.
The development of technology requires a balance between encouragement and setting inviolable boundaries with sufficiently strong risk governance mechanisms.
We need both a supportive hand to accelerate innovation and a leash to control what must not exceed limits.
As an ordinary observer, I hope the T800’s kick remains strictly a demonstration, and only in the lab.