Ministry of Science and Technology to define high-risk AI categories

The National Institute of Digital Technology and Digital Transformation is currently consulting relevant units under the Ministry of Science and Technology (MOST) to draw up a list of high-risk artificial intelligence (AI) systems. This initiative aims to ensure that AI-related regulations are effective and tailored to specific requirements of each sector.

Under the proposed framework, these units would draw on the draft criteria for high-risk AI systems when making their recommendations, specifying the name of each AI system (defined by its intended use or purpose) and the scope and conditions under which it should be classified as high-risk.

In the healthcare sector, for instance, authorities may propose adding "AI systems that recommend treatment plans or decide on invasive medical procedures" to the list of high-risk AI systems, but only under the condition that "the systems directly decide on or perform the procedures without independent clinical oversight by medical practitioners".

The formulation of the list of high-risk AI systems constitutes an important part of implementing the AI Law, under which AI systems are grouped into three risk levels: high, medium and low. Providers are expected to initially self-classify their products based on forthcoming technical guidelines, with more stringent oversight provisions applicable to high-risk AI systems.

Previously, MOST proposed a draft decree suggesting that AI systems be deemed high-risk if they pose a potential threat to life, health, property, human rights, or national security. Other factors, such as the level of automation, the role of AI systems in making final decisions, and the extent of human oversight and intervention in system operations, are also taken into account.

The draft identifies several critical sectors where AI errors could have significant consequences, including healthcare, education, recruitment and employment, finance and banking, transport, energy, justice, critical technical infrastructure, and public administration and services. The scale of impact, such as the number of users or connection with important infrastructure systems, is another consideration.

However, not all AI systems meeting the prescribed criteria would be labelled high-risk. Exemptions may apply to systems performing purely technical functions, such as data collection, processing, classification or quality improvement, where they do not directly affect the rights or interests of organisations or individuals.

Systems incorporating substantive human oversight may also be excluded from the list, provided authorised persons can independently review, intervene in or reject AI-generated decisions before they take effect. Similarly, internal corporate systems with no external impacts or those offering analysis, forecasts or recommendations for reference only, may fall outside the high-risk category.

- (VLLF) 
