Vietnam became the first Southeast Asian country to implement a comprehensive AI law on March 1, drawing inspiration from the EU's AI Act while charting its own regulatory course with fault-based liability rules and broad prohibitions on harmful AI uses. The Law on Artificial Intelligence reflects Hanoi's push for what Communist Party chief To Lam calls the "era of national rise," a vision aimed at transforming Vietnam into a high-income developed nation by 2045, with technology as a key engine.

How Does Vietnam's AI Law Differ From Europe's Regulatory Model?

Vietnam's approach diverges significantly from the EU's harm-based liability framework. Instead, the law establishes fault-based liability: companies are responsible only if they acted negligently or intentionally. This distinction shifts accountability in consequential ways. If a user misuses ChatGPT to create toxic content, for example, the user bears responsibility, not the company that built the system, as long as the output was properly labeled as AI-generated.

"From a technical perspective, there's no restriction in the Vietnam law which says that you can't have self-driving cars, for example. You could have automation, you could have AI making decisions, but the responsibility of that will be with a human," said Rohit Kumar, CEO of Risk AI Technologies.

The law also takes a deliberately broad approach to prohibited acts, which include exploiting AI for unlawful purposes, creating deepfakes to deceive or manipulate, and disseminating forged materials that threaten national security or public order. This intentional breadth grants local authorities extensive enforcement flexibility, though it creates uncertainty about how the rules will be applied in practice.

What Are the Key Requirements for AI Companies Operating in Vietnam?
The law establishes a risk-based classification system: AI companies must self-classify their products and notify the Ministry of Science and Technology before deploying medium- or high-risk systems. High-risk AI systems face the most stringent scrutiny, including risk assessments, human oversight, registration in a national database, and incident reporting.

- Self-Classification Requirement: AI companies must determine whether their products fall under the high-, medium-, or low-risk category before market deployment.
- Labeling Obligations: Both providers and deployers must label AI-generated images, video, and audio to prevent deception and align with Vietnam's updated Cybersecurity Law.
- Local Contact Point: Foreign providers of high-risk AI systems must establish a local contact point in Vietnam to facilitate regulatory oversight and communication.
- Routine Audits: Medium- and high-risk systems are subject to regular audits to ensure ongoing compliance with safety and ethical standards.

The draft decision defining high-risk AI systems has drawn the most concern from AI companies. Local tech startups worry that stringent criteria could delay deployment, impose heavy administrative burdens, and slow innovation. However, the law includes a 12- to 18-month grace period for existing AI systems to comply.

Does Vietnam's Law Support Domestic AI Development?

Despite strict oversight mechanisms, Vietnam's AI Law includes provisions designed to propel the domestic AI industry forward. It establishes plans for national AI infrastructure, including a national AI database, human resource development programs, and financial incentives through an AI Development Fund. These support mechanisms aim to help small and medium-sized enterprises (SMEs) navigate compliance while building competitive advantage.

"The AI Law offers quite a lot of support to SMEs," noted Nguyen Duc Lam, advisor at the Hanoi-based Institute of Policy Studies.
The law was deliberately left broad in certain areas to allow for flexibility. For instance, the criteria for high-risk AI systems will be updated annually by the Prime Minister, enabling the government to adjust the rules as technology evolves and implementation challenges emerge.

Why Are Industry Groups Concerned About Implementation Speed?

The AI Law was drafted in just three months and went through multiple consultation rounds with AI companies, industry groups, research institutes, and international experts. Even so, industry representatives argue the timeline was too rushed for thorough analysis and substantive feedback.

"As we have seen in the case of the EU AI Act and Korea's Basic AI Law, rushed implementation creates regulatory uncertainty, compliance bottlenecks, and the need for subsequent clarifications to address unintended consequences," said Jonathan McHale, Vice President of Digital Trade at the Computer and Communications Industry Association.

The Business Software Alliance, which represents OpenAI, Microsoft, Adobe, and others, stated that the timelines were "insufficient for stakeholders to analyze the document rigorously or provide substantive feedback." Industry groups have called for more time to prepare, warning that rushed implementation "could deter market entry and limit the benefits of AI investment."

Vietnam's AI Law represents a middle ground between Europe's detailed regulatory approach and lighter-touch frameworks elsewhere in Asia. As the first comprehensive AI regulation in Southeast Asia, it will likely influence how other countries in the region approach AI governance, making its implementation outcomes crucial for the region's AI future.