ISO/IEC 23894 is one of the core standards developed by ISO/IEC JTC 1/SC 42 (the artificial intelligence subcommittee) and was published in 2023. The standard builds on the ISO 31000 risk management framework and adapts it to the specifics of AI technology, proposing a risk management methodology covering the entire AI lifecycle.
Its core objectives include:
Risk identification: systematically identify potential risks in AI development, deployment, and maintenance.
Risk control: provide actionable control measures to reduce the impact of risks on organizations and society.
Compliance support: help organizations meet AI-related regulatory requirements worldwide (such as the EU Artificial Intelligence Act).
Trustworthy AI: enhance stakeholders' trust in AI systems and promote the responsible application of the technology.
Scope of application
This standard applies to all organizations involved in the design, development, deployment, and operation of AI systems, including:
Technical developers: such as AI algorithm engineers and data scientists.
Enterprise managers: who must ensure that AI applications align with business objectives and ethical standards.
Regulatory agencies: as a reference for evaluating the compliance of AI systems.
Third-party assessment bodies: which provide independent risk assessment and certification services.
(1) Risk management principles
ISO/IEC 23894 emphasizes that AI risk management should follow a set of principles, including but not limited to legality, transparency, accountability, traceability, robustness, and adaptability. These principles guide organizations throughout the AI risk management process, ensuring that risk management activities comply with legal and regulatory requirements, that the behavior and decisions of AI systems can be explained and traced, and that AI systems remain sufficiently stable and flexible across a range of situations.
(2) Risk Management Framework
This standard proposes a comprehensive AI risk management framework that covers the following key aspects:
• Leadership and commitment: Require the organization's senior management to make an explicit commitment to AI risk management, establish corresponding policies and goals, and provide the necessary resources for risk management activities.
• Integration: Emphasize the integration of AI risk management with the overall business processes and management system of the organization, ensuring that risk management runs through the entire lifecycle of the AI system.
• Design: Understand the organization and its environment, articulate the risk management commitment, assign organizational roles and responsibilities, allocate resources, and establish communication and consultation mechanisms, thereby laying the foundation for implementing AI risk management.
• Implementation: Take concrete actions to address AI risks according to the established risk management plan, including risk assessment, risk treatment, monitoring, and review.
• Evaluation and improvement: Regularly evaluate the effectiveness of AI risk management and continuously adjust and improve risk management strategies and measures based on the results, adapting to changes in the organization's internal and external environment and the evolution of AI technology.
(3) Risk management process
ISO/IEC 23894 elaborates on the specific process of AI risk management, which mainly includes the following steps:
• Risk identification: Across the AI system lifecycle, comprehensively identify potential risk factors, such as technical risks (algorithm defects, data quality issues), business risks (data security, compliance issues), legal risks (privacy regulations, intellectual property issues), and personnel risks (insufficient employee training, weak ethical awareness).
• Risk assessment: Analyze and evaluate the identified risks, determine their likelihood of occurrence, and assess their impact on the organization and stakeholders. A combination of qualitative and quantitative methods, such as risk matrix analysis and probability-and-impact analysis, is typically used to provide a basis for developing risk response strategies.
• Risk treatment: Based on the assessment results, develop corresponding treatment strategies, including risk avoidance, risk transfer, risk acceptance, and risk mitigation. Organizations choose the appropriate strategy for each risk and develop detailed treatment plans that specify responsible individuals and timelines.
• Monitoring and Review: Continuously monitor the operation of AI systems and the effectiveness of risk management measures, regularly review the risk management process, promptly identify new risks or risk changes, and adjust risk management strategies and plans as needed.
• Recording and reporting: Record and report the information generated throughout the risk management process, including risk identification results, risk assessment reports, and risk treatment plans and their implementation status, to support internal decision-making while satisfying external regulators and stakeholders.
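The assessment and treatment steps above can be illustrated with a minimal sketch of the qualitative risk matrix analysis the standard's process admits. Note that the 5x5 scales, thresholds, treatment bands, and example risks below are illustrative assumptions for this sketch, not values prescribed by ISO/IEC 23894.

```python
# Hypothetical sketch of a risk-matrix assessment step. Scales and
# thresholds are illustrative assumptions, not prescribed by the standard.
from dataclasses import dataclass

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

@dataclass
class Risk:
    name: str
    likelihood: str  # key into LIKELIHOOD
    impact: str      # key into IMPACT

def risk_score(risk: Risk) -> int:
    """Score = likelihood x impact on a 5x5 matrix (range 1..25)."""
    return LIKELIHOOD[risk.likelihood] * IMPACT[risk.impact]

def risk_level(score: int) -> str:
    """Map a matrix score to a treatment band (illustrative thresholds)."""
    if score >= 15:
        return "mitigate or avoid"
    if score >= 8:
        return "mitigate or transfer"
    if score >= 4:
        return "monitor"
    return "accept"

# A tiny example risk register, recorded for reporting purposes.
register = [
    Risk("algorithmic bias in loan scoring", "possible", "major"),
    Risk("training-data leakage", "unlikely", "severe"),
]

for r in register:
    s = risk_score(r)
    print(f"{r.name}: score={s}, treatment={risk_level(s)}")
```

In practice organizations would calibrate the scales and bands to their own risk appetite and keep the register under the recording-and-reporting step above.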
1. Compliance and risk management
Meet global AI regulations (such as the EU Artificial Intelligence Act), avoiding legal disputes and heavy fines; systematically reduce risks such as algorithmic bias and data leakage, minimizing the reputational damage caused by AI incidents.
2. Enhance credibility and market competitiveness
Transparent risk management processes and explainable AI design strengthen the trust of users, customers, and regulators in the system and establish a brand image of responsible innovation.
3. Promote international cooperation and market access
Alignment with an internationally recognized AI governance framework helps enterprises overcome regional compliance barriers and expand into global markets, providing an advantage in highly regulated sectors such as finance and healthcare.
4. Optimize resources and technological innovation
Full-lifecycle risk management reduces trial-and-error costs and duplicated investment, while providing structured support for the secure iteration of AI technology and unlocking its potential.
(I.) Application materials
• Basic organizational information: including the organization's name, address, contact information, etc.
• AI system description: Detailed description of the functions, application scenarios, technical architecture, etc. of the AI system.
• Risk management plan: Demonstrates how the organization identifies, assesses, and addresses AI risks, covering risk identification, risk assessment, risk treatment strategies, etc.
• Compliance statement: A statement demonstrating that the organization's AI system complies with relevant laws, regulations, and standards.
• Internal audit report: An audit report on the organization's AI risk management, demonstrating that risk management measures have been implemented.
• Stakeholder communication records: Record communication with stakeholders (such as customers, suppliers, regulatory agencies, etc.) regarding AI risk management.
(II.) Application requirements
1. System establishment and operation
A risk management system covering the entire AI lifecycle has been established in accordance with ISO/IEC 23894 and has been in operation for at least 3 months.
At least one full internal audit and management review has been completed to verify the effectiveness of the system.
2. Organizational commitment
Documented senior-management commitment to AI risk management, such as a policy statement.
Clearly defined risk management responsibilities (for example, an AI ethics committee or a risk management team).
3. Resource preparation
Have the technical capabilities required to implement risk management, such as data governance tools and model interpretability techniques.
Relevant employees have been trained on ISO/IEC 23894 and AI ethics (training records are required).
4. Compliance foundation
The AI system complies with the laws and regulations of the country or region where it operates, such as the GDPR and the EU Artificial Intelligence Act.