Download the full infographic: EU AIA Conformity Assessment: A step-by-step guide
The other determining factor is your role in relation to the system. If you're the provider of the system or the responsible actor, then you are responsible for conducting the CA. Usually the provider and the responsible actor will be the same party, but there are certain exceptions where the responsible actor is someone else.
Responsible actors may be a distributor, importer, deployer, or other third party that puts an AI system to use under their own name or trademark. The exact legal requirements for whether and when someone other than the provider would be required to perform the CA haven't yet been set.
When should I do a conformity assessment?
Once you’ve determined that a CA is required, the next question is when to conduct it. There are two points at which a conformity assessment must be done:
- Ex ante (before the event): The CA has to be conducted before the AI system is placed on the EU market – meaning before it is made available for public use.
- Ex post (after the event): After a high-risk system has been placed on the market, a new CA is required if the system undergoes a substantial modification. It’s worth noting that a system continuing to learn after being placed on the market (as models do) does not by itself count as a substantial modification; rather, the term refers to changes that would significantly affect the system’s compliance with the requirements measured by the CA. A minimal sketch of this timing logic follows the list.
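To make the timing rules concrete, here is a minimal illustrative sketch in Python (not legal advice; the class, field names, and messages are assumptions for illustration, not terms defined by the AI Act):

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical record of a high-risk AI system's market status."""
    placed_on_eu_market: bool      # has the system been made available in the EU?
    substantially_modified: bool   # changes affecting compliance with CA requirements

def conformity_assessment_due(system: AISystem) -> str:
    """Illustrative check mirroring the ex ante / ex post timing rules above."""
    if not system.placed_on_eu_market:
        # Ex ante: the CA must be completed before market placement.
        return "ex ante: conduct a CA before placing the system on the EU market"
    if system.substantially_modified:
        # Ex post: a substantial modification triggers a new CA.
        return "ex post: conduct a new CA for the substantially modified system"
    return "no new CA required (continued learning alone is not a substantial modification)"
```

The key point the sketch encodes is that continued learning alone does not trigger a new CA; only changes that would affect compliance with the assessed requirements do.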
Who actually conducts conformity assessments?
Conformity assessments can be conducted in two ways: either internally or through a third-party process. As the names suggest, internal CAs are conducted by the provider (or the responsible actor), while third-party CAs are conducted by an external “notified body.”
Article 43 of the AI Act lays out more explicit guidance on which cases require an internal CA and which ones should go through a third-party process; you can also find more detailed steps for each process in this step-by-step guide.
Putting conformity with requirements for high-risk systems into practice
As mentioned earlier, CAs aim to verify that high-risk systems comply with all seven requirements – risk management, data governance, technical documentation, record keeping, transparency obligations, human oversight, and accuracy, robustness, and cybersecurity – laid out in the AI Act.
Unless specified otherwise, all these requirements should be met before the AI system is put into use or enters the market. Once the system is in use, the provider must also ensure continuous compliance throughout the system’s lifecycle.
When evaluating these requirements, the intended purpose of the system needs to be taken into account, as does its reasonably foreseeable misuse.
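As a purely illustrative aid (the seven requirement names come from the AI Act, but the checklist structure and function below are hypothetical, not a prescribed compliance format), a provider might track the status of each requirement like this:

```python
# Hypothetical checklist for the seven high-risk requirements.
# The requirement names follow the AI Act; everything else is illustrative.
REQUIREMENTS = (
    "risk management",
    "data governance",
    "technical documentation",
    "record keeping",
    "transparency obligations",
    "human oversight",
    "accuracy, robustness, and cybersecurity",
)

def outstanding_requirements(status: dict[str, bool]) -> list[str]:
    """Return the requirements not yet evidenced as compliant."""
    return [req for req in REQUIREMENTS if not status.get(req, False)]

# Example: a system with two requirements still open.
status = {req: True for req in REQUIREMENTS}
status["record keeping"] = False
status["human oversight"] = False
print(outstanding_requirements(status))
# -> ['record keeping', 'human oversight']
```

Because compliance must be maintained throughout the system’s lifecycle, a real process would revisit such a checklist continuously, not just once before market placement.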
Use this guide to dive deeper into each of the requirements and to see how compliance and implementation play out for each of them.