An artificial intelligence system would be classified as high-risk not based on the technology, algorithms or models it applies, but on the potential impacts of its outputs on the right to life, the right to privacy, the right to access healthcare, education and essential services, and personal freedoms.
The Ministry of Science and Technology is seeking public comments on the Prime Minister’s draft decision on the list of high-risk artificial intelligence (AI) systems. The move is expected to establish an important legal foundation for the management, supervision and control of risks arising from AI development and deployment, while ensuring a balance between fostering innovation and safeguarding the lawful rights and interests of citizens, organisations and society.
As per the draft, the list of high-risk AI systems is formulated on the basis of risk classification criteria for specific cases, ensuring clarity, transparency and feasibility.

The draft further provides that AI systems intended for large-scale implementation with widespread impacts, those whose risks cannot be fully managed under existing regulations, and those to which regulatory requirements can be uniformly applied might also be classified as high-risk.
Accordingly, the list of high-risk AI systems would consist of four groups.
The first group includes AI systems that may affect human rights, such as systems used to classify, rank or predict human characteristics, manage employment and labour, or conduct large-scale biometric identification.
The second group comprises AI systems likely to affect safety and security, such as systems ensuring the safety of products and goods, and those safeguarding national security and social order.
The third group contains AI systems operating in essential sectors closely linked to public interests. These include systems that operate or monitor critical infrastructure; support medical diagnosis, treatment and healthcare resource allocation; facilitate access to education and the assessment of learning outcomes; assess individuals' financial risks for lending and insurance; and support officials in making administrative decisions or carrying out judicial support activities.
The last group encompasses AI systems with large-scale impacts or those likely to cause consequences that are difficult to remedy. It covers systems deployed across two or more provinces or affecting more than 50,000 users, as well as systems that use sensitive personal data or data subject to high confidentiality requirements.
- (VLLF)
