Vietnam just became the first Southeast Asian nation to enforce a comprehensive AI law, and it is taking a strikingly different approach from Europe or the US. On March 1, the Law on Artificial Intelligence came into effect, drawing inspiration from the EU's AI Act while charting its own course with fault-based liability rules, broad prohibitions on harmful AI use, and mandatory risk classifications for companies.

The law reflects Vietnam's broader push toward what Communist Party chief To Lam calls the "era of national rise," a vision for transforming the country into a high-income developed nation by 2045, with technology as a key engine.

What Makes Vietnam's AI Law Different From Europe's?

While the European Union built its AI Act around a harm-based liability model, Vietnam took a different path. Instead of holding companies liable for harms their AI systems cause, Vietnam assigns responsibility to the humans who deploy or use the technology. This distinction matters because it reflects a fundamentally different philosophy about who should be accountable when AI goes wrong.

"From a technical perspective, there's no restriction in the Vietnam law which says that you can't have self-driving cars, for example. You could have automation, you could have AI making decisions, but the responsibility of that will be with a human," said Rohit Kumar, CEO of Risk AI Technologies.

The law also takes a notably broad approach to what it prohibits. Rather than listing specific AI applications that are banned, Vietnam's rules target harmful outcomes: exploiting AI for unlawful purposes, creating deepfakes to deceive or manipulate people, and spreading forged materials that threaten national security or public order. This intentional breadth gives local authorities significant flexibility in enforcement, though it also creates uncertainty for companies trying to understand exactly what is allowed.
How Do Companies Actually Comply With Vietnam's New Rules?

The law requires AI companies operating in Vietnam to self-classify their products into three risk categories: high, medium, or low. Those offering medium- or high-risk systems must notify the Ministry of Science and Technology before deployment and submit to routine audits. Additionally, all providers and deployers must label AI-generated images, video, and audio, a requirement that aligns with Vietnam's updated Cybersecurity Law taking effect in July.

- Risk Classification: Companies must self-assess whether their AI systems fall into the high, medium, or low-risk category and notify authorities before deploying medium- or high-risk systems.
- Content Labeling: All AI-generated images, video, and audio must be clearly labeled as such, helping prevent the spread of unlabeled deepfakes and synthetic media.
- Local Presence: Foreign providers of high-risk AI systems are required to establish a local contact point in Vietnam, ensuring accountability and accessibility for regulators.
- Incident Reporting: High-risk AI systems must be registered in a national database, and operators must report incidents or problems to authorities.

The law came together remarkably fast: it was drafted in just three months and rushed through multiple consultation rounds with AI companies, industry groups, research institutes, and international experts. The compressed timeline raised concerns among industry advocates. Wong Wai San, Director of Policy for Asia-Pacific at the Business Software Alliance, noted that the timeline was "insufficient for stakeholders to analyze the document rigorously or provide substantive feedback."

Why Are Tech Companies Worried About Implementation?

The biggest source of anxiety centers on how Vietnam will define high-risk AI systems.
The draft decision outlining the scope and criteria for high-risk classification has drawn intense scrutiny because these systems face the most stringent regulatory requirements, including risk assessments, human oversight, national database registration, and incident reporting. For foreign providers, the requirement to establish a local contact point adds another layer of administrative burden.

Local startups worry that overly strict criteria could delay product launches and create administrative bottlenecks that disproportionately hurt smaller companies. The Computer and Communications Industry Association, which represents tech giants like Amazon, Apple, Google, and Meta, warned that rushed implementation could "deter market entry and limit the benefits of AI investment."

"As we have seen in the case of the EU AI Act and Korea's Basic AI Law, rushed implementation creates regulatory uncertainty, compliance bottlenecks, and the need for subsequent clarifications to address unintended consequences," said Jonathan McHale, Vice President of Digital Trade at the Computer and Communications Industry Association.

The law does provide a 12- to 18-month grace period for existing AI systems to come into compliance, giving companies time to adjust. However, many industry groups have called for even more time to prepare implementation guidance and avoid the kind of regulatory chaos that followed the EU AI Act's rollout.

What Support Does Vietnam Offer to Its Own AI Industry?

While the law imposes strict rules, it also includes provisions designed to nurture Vietnam's domestic AI sector. The legislation creates plans for national AI infrastructure, including a national AI database, human resource development programs, and financial incentives through an AI Development Fund.
These measures suggest that Vietnam sees regulation not as a barrier to innovation, but as a framework that can actually accelerate responsible AI development. "The AI Law offers quite a lot of support to SMEs," noted Nguyen Duc Lam, advisor at the Hanoi-based Institute of Policy Studies, urging local startups to look beyond the risk classification requirements.

The law's approach to updating rules also reflects flexibility. The criteria for high-risk AI systems will be updated annually and issued by the Prime Minister, allowing the government to adjust requirements as technology evolves and implementation experience accumulates. This adaptive approach contrasts with more rigid regulatory frameworks that require lengthy legislative processes to change.

What Does This Mean for the Global AI Regulation Landscape?

Vietnam's law matters beyond Southeast Asia because it demonstrates how smaller nations can write their own AI rules rather than simply adopting frameworks from the EU or US. The fault-based liability approach, in particular, may prove influential. Rohit Kumar noted that this model is likely to spread globally, pointing to how banks and financial institutions already operate autonomous AI systems while keeping humans accountable for their functioning.

The law also shows how governments can balance competing priorities: encouraging innovation while protecting citizens from harmful AI applications, maintaining digital sovereignty, and building domestic capacity. As more countries develop their own AI regulations, Vietnam's approach offers a template for how developing nations can assert control over technology deployment without simply importing rules designed for wealthier, more industrialized economies.