Building trust in AI systems is everyone’s business. AI system providers and deployers must be prepared to assess their readiness to comply and align with developing AI regulatory requirements, emerging industry best practices, and societal expectations around the use of AI.
The sooner organizations plan for responsible innovation with AI, the smoother the transition into this new business environment will be. Organizations that have already invested resources in building robust privacy, information security, ethics, and other trust programs will find it less challenging to account for the use of AI because they can build on their existing compliance program foundations.
How can organizations prepare for widespread use of AI as the EU’s AI Act comes into effect?
The EU AI Act outlines the requirements for developing and placing AI systems on the EU market, as well as for deploying and integrating them, to ensure that the necessary controls are in place for users to interact with AI systems safely. Providers of AI systems will have to assess whether any AI systems in their offering fall under the classification of a high-risk AI system and then determine whether any exemptions, which would relieve them from certain compliance requirements, apply. For example, the law carves out an exemption for financial institutions, which applies to the governance and quality assurance requirements where organizations already maintain equivalent standards under other EU regulatory frameworks.
New requirements mandate that providers of high-risk AI systems assess their systems to ensure that they are sufficiently tested for unwanted bias in the training data. The challenge is that the current law offers little guidance on specific requirements or suggested methodologies for bias testing, leaving the content of such an assessment open to interpretation. We should expect more guidance once the AI oversight authorities start the rulemaking process to fill these gaps.
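Because the Act does not prescribe a methodology, one reasonable starting point is a simple, repeatable statistical check on the training data. The sketch below computes a demographic parity gap (the difference in positive-label rates across groups); the column names, protected attribute, and the 0.10 tolerance are purely illustrative assumptions, not requirements drawn from the law.

```python
# Illustrative only: a minimal demographic-parity check on training data.
# The EU AI Act does not prescribe this (or any) specific metric; column
# names, the protected attribute, and the 0.10 threshold are assumptions
# made for the sake of the example.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           protected_col: str,
                           label_col: str) -> float:
    """Return the max difference in positive-label rates across groups."""
    rates = df.groupby(protected_col)[label_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical training data with a binary outcome and a protected attribute.
training_data = pd.DataFrame({
    "gender":   ["f", "f", "m", "m", "m", "f", "m", "f"],
    "approved": [1,    0,   1,   1,   1,   0,   1,   0],
})

gap = demographic_parity_gap(training_data, "gender", "approved")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # assumed internal tolerance, not a legal threshold
    print("Potential unwanted bias - escalate for review and documentation.")
```

In practice a bias assessment would cover multiple metrics and lifecycle stages; the point is simply that whatever methodology an organization chooses should be documented, repeatable, and ready to show to a regulator.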
Deployers of high-risk AI systems will have to be prepared to update their publicly facing disclosures, confirming not only the use of AI systems but also outlining whether any personal or sensitive information is processed at any stage of the AI system lifecycle, how it is processed, and how individuals can exercise their hybrid AI/privacy-linked rights, e.g., a right to opt out of being subject to an AI system. In furtherance of the ‘human-in-the-loop’ principle, deployers of certain higher-risk AI systems will be required to appoint someone internally to be responsible for supervising day-to-day AI operations, including monitoring of AI systems (e.g., identifying and handling AI incidents), to ensure that the outcomes produced by such systems remain compliant with the adopted risk standards. Such an AI governance professional is expected to possess specialized knowledge covering AI risk governance, AI program management, and AI compliance.
Other requirements in the EU AI Act apply to providers of foundation models, which are subject to their own regulatory framework. Depending on the technical sophistication of the model, requirements include i) producing technical documentation, ii) performing risk assessments, iii) developing and publishing public disclosures, and iv) introducing policies for compliance with copyright laws.
What are the common challenges that many organizations are facing as they bid to achieve compliance?
Staying on top of rapid advancements in AI technology is a challenge for compliance teams, particularly given the often overwhelming internal and external business pressures to adopt AI systems in order to stay ahead and reap the promised benefits. Tackling new AI governance challenges means that such teams must set priorities and develop a plan to address the most pressing areas first. Looking at the requirements set out in the EU AI Act, the key AI governance compliance areas include:
- Assessing internal governance to account for the management of AI processes. The key consideration here is to ensure accountability for AI management within your organization. Setting up an AI Governance Committee responsible for steering the direction of the AI governance program, with oversight of day-to-day AI governance operations, is the first step towards responsible AI governance.
- AI policy development. Assess existing internal policies, identify gaps, and fill those gaps with new policies that account for AI use and address new risks.
- Mapping out AI system inventories. This is a key priority for any organization exposed to AI use and starts with developing an AI system inventory that provides visibility into the use of AI within the organization. It can also be used to develop a detailed strategy to prepare the organization for full compliance (a minimal illustration of an inventory record follows this list).
- Third-Party Risk Management (TPRM). Audit the current TPRM processes to make sure that new AI risks in all trust program areas, e.g., privacy, infosec, and ethics, are accounted for and integrated into the overall vendor management program.
- AI literacy. The EU AI Act sets this as one of the priority areas and a key guardrail in the furtherance of responsible AI deployments. Promoting AI risk awareness among everyone involved in the AI system lifecycle is key to responsible AI governance. Educate the workforce on AI risks, promote best practices, train employees to be responsible users of AI, and teach them how to identify potential issues and report them when necessary.
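To make the inventory point above more concrete, here is one way an AI system inventory record could be structured. This is only a sketch: the field names, risk categories, and example entry are assumptions made for illustration, as the EU AI Act does not mandate any particular inventory schema.

```python
# Illustrative sketch of an AI system inventory record. Field names and risk
# categories are assumptions for the example; the EU AI Act does not mandate
# a particular inventory schema.
from dataclasses import dataclass, field
from enum import Enum

class RiskClass(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AISystemRecord:
    name: str
    business_purpose: str
    provider: str                  # internal team or third-party vendor
    risk_class: RiskClass
    processes_personal_data: bool
    human_oversight_owner: str     # person accountable for day-to-day monitoring
    applicable_exemptions: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="resume-screening-model",
        business_purpose="Shortlist job applicants",
        provider="Acme HR Tech (vendor)",
        risk_class=RiskClass.HIGH,
        processes_personal_data=True,
        human_oversight_owner="AI Governance Committee",
    ),
]

# A simple view used when planning remediation: which high-risk systems exist?
high_risk = [r.name for r in inventory if r.risk_class is RiskClass.HIGH]
print(high_risk)
```

Even a lightweight register like this gives compliance teams a single place to see which systems are high risk, whether personal data is involved, and who owns human oversight for each one.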
High expectations around digital brains come with high risks. Policy makers worldwide have started to roll out new regulations aimed at identifying these risks, outlining regulatory schemes to tackle AI risk areas, and promoting responsible innovation that supports transparency, responsibility, and accountability in AI.
The key objective of the EU AI Act is to ensure that the risks around the use of AI are addressed and that we create the conditions for reasonably safe adoption of AI systems. The new AI standards do not intend to create unnecessary red tape or slow down AI innovation. On the contrary, the new requirements are designed to control unacceptable levels of risk and the potential adverse effects of widespread AI adoption. If we want to support the ethical and legal use of AI technology, all parties involved in the AI lifecycle, including developers, deployers, traders, and users of AI, must work together to build trust in AI systems.
By Adomas Siudika, Senior Privacy Counsel at OneTrust