National AI ethics framework issued to guide safe, responsible rollout

According to the circular, which takes effect on March 10, the framework imposes specific obligations on entities and individuals involved in AI activities.
The robot, equipped with a 21.5-inch screen, displays queue numbers, estimated wait times, service counters, and QR codes for accessing digital services. (Photo: VNA)

The Minister of Science and Technology has signed a circular issuing the National Artificial Intelligence Ethics Framework, designed to steer the research, development, and deployment of AI systems toward outcomes that are safe, responsible, and beneficial to individuals, communities, and society at large.

Under the circular, AI use must ensure safety and reliability while preventing harm to human life, health, dignity, honour, and mental well-being.

Developers and operators bear responsibility for embedding safety features from the design stage, anticipating potential harmful scenarios, and adopting suitable preventive controls. They must also establish clear quality criteria for data, models, and outputs, alongside internal processes for testing, validation, and verification prior to any deployment.

The framework mandates human oversight and intervention capabilities for all AI-driven decisions and actions, calibrated to the system's potential impact level. Entities and individuals must set up mechanisms to gather feedback, detect errors, initiate corrections, and maintain contingency plans in cases of malfunction or misuse. Robust security protocols must detect and mitigate threats, including unauthorised access, system hijacking, data or model poisoning, adversarial attacks, vulnerability exploitation, data breaches, or other forms of misuse, thereby ensuring the confidentiality, integrity, and availability of data, models, algorithms, and supporting infrastructure.

Emphasis is placed on respect for human and civil rights, with commitments to fairness, transparency, and non-discrimination throughout AI development and use. Entities and individuals must apply appropriate review processes to prevent infringements on privacy, personal data protection, freedom of choice, access to information, the right to equal treatment, and other rights enshrined in law.

Efforts are required to detect and mitigate biases in data, models, and operations, with particular attention to effects on vulnerable groups such as children, the elderly, people with disabilities, and other disadvantaged populations. Entities and individuals must provide clear notifications about AI involvement, delivering reasonable details on system goals, scope, data sources, general operating principles, and known limitations to prevent misconceptions about capabilities.

Moreover, the framework encourages AI use that advances social welfare, inclusivity, and sustainable progress. Entities and individuals should evaluate energy use, computing resources, and environmental footprints across the full AI lifecycle, favouring energy-efficient technologies and low-emission processes. AI system design must conform to social ethical norms and reflect Vietnam's cultural identity, while avoiding discriminatory outputs or adverse impacts on community interests.

The framework also encourages innovation and corporate social responsibility. Responsible experimentation is endorsed, along with open research and knowledge dissemination in accordance with legal regulations, as well as the protection of intellectual property rights.

The framework will undergo periodic review and updates every three years, or sooner in response to major changes in technology, legislation, or management practices.

The issuance reinforces the implementation of the Politburo’s Resolution No. 57-NQ/TW on breakthroughs in sci-tech, innovation, and national digital transformation. It also supports the enforcement of the Law on AI, which entered into force on March 1, 2026./.