Personal and data privacy have been on the radar since the internet became a way of life. Good companies make real efforts to protect their users and customers – because it’s the right thing to do and because, increasingly, the law mandates such protections. In Canada, the proposed Bill C-27 sets out new rules about consumer privacy in the Consumer Privacy Protection Act and enforcement provisions in the Personal Information and Data Protection Tribunal Act, which I’ve covered in previous articles.
Bill C-27 includes a third provision, the Artificial Intelligence and Data Act (AIDA), to address the increasing use of AI-based systems. AI presents a new (and still emerging) challenge for enterprises and legislators to protect consumers from the risks of harm and biased output. AIDA specifies that it addresses “high-impact AI systems” that can have a significant effect on the world.
As jurisdictions worldwide explore exactly how to monitor and regulate AI, Canada is joining the race with this new bill. If you do business in Canada, it’s worth studying the proposed law and understanding how your AI initiatives may affect your commitments to privacy, protection, and trust.
AI’s potential – and AI’s potential problems
AI is making headlines these days, and for good reason: It has the potential to reinvent all kinds of business processes and create entirely new value chains. But it also presents some real problems concerning privacy and illegitimate use.
Consider the example of a U.S.-based facial recognition company that came under fire for amassing a database of more than three billion images. The images were all obtained without consent, and the company had no lawful basis for gathering or using them. The company was fined in the United Kingdom. In Canada, in response to pressure from the Privacy Commissioner of Canada, the company stopped offering facial recognition services in three provinces, ceased collecting images without consent, and deleted the images it had already gathered.
The issues outlined above might seem like an obvious invasion of privacy and misuse, but it’s easy to imagine more innocuous yet still problematic applications of AI. Let’s say you’re planning to use artificial intelligence to automate résumé screening for new hires. The system can scan and “digest” résumés much faster than a human screener and, at least in theory, much more objectively. But unless you can be confident that the algorithm is free of inherent bias (which usually stems from a flaw in the source data used to train the system), you cannot be confident that its output is free of harm.
Obviously, a biased hiring system is bad for both job seekers and your business: it limits the pool of potential candidates and can perpetuate existing marginalization. The new law aims to mitigate the risks of harm in AI systems and to enable the Canadian government to create new rules for compliance and new tools for enforcement, including legal actions and monetary fines.
Under the rules of AIDA, you must be able to explain what kind of information you’re gathering, how you collected it, and how your algorithm works. You must also allow individuals to request access to that information. In addition, you must be prepared to demonstrate that your algorithm is not biased. In other words, your AI strategy must be transparent. Transparency also extends to your training data: you must be able to show that it is unbiased and was collected legally. Note that Bill C-27 also makes it your responsibility to ensure that any third-party vendors you use for AI applications comply with the new law.
AIDA also calls for the creation of a new government entity, the “Artificial Intelligence and Data Commissioner,” to monitor compliance, order audits, and share information with relevant governing bodies. This new entity will also be able to levy substantial penalties: Violations are punishable by fines of up to CAN$25 million or 5% of global annual revenue (whichever is higher). And companies or individuals who mislead or obstruct the government about violations may face fines of CAN$10 million or 3% of global annual revenue. The law also proposes prison terms for individuals who use AI systems unlawfully.
What you can do to prepare
Although AI-based applications are exploding, the rapid evolution of the technology means that AI legislation is very much a work-in-progress. The AIDA component of Canada’s Bill C-27 could change before it’s passed. Still, given the stakes, both in terms of your responsibilities to your users and customers and the potential legal consequences of non-compliance, now is a good time to review how your company is using, or planning to use, artificial intelligence.
You should ensure you have a deep understanding of the algorithms you are using (or that your third-party vendors are using) so that you can identify potential risks. You should also ensure that your data collection, methodologies, and algorithms can demonstrate a high degree of transparency, be monitored for compliance, and be protected.
OneTrust can help with our Algorithmic Impact Assessments, Third-Party Risk Management, and the recently introduced AI Governance solution. Our longstanding solutions for compliance automation can make a substantial difference in how you comply with emerging AI laws like Bill C-27. Good for your customers; good for your business.
To see how OneTrust can help you prepare for the reality of Bill C-27, request a free trial today.