Laws governing technology have historically focused on the regulation of information privacy and digital communications. However, governments and regulators around the globe have increasingly turned their attention to artificial intelligence (AI) systems. As the use of AI becomes more widespread and changes how business is done across industries, there are signs that existing declarations of principles and ethical frameworks for AI may soon be followed by binding legal frameworks. [1]
On June 16, 2022, the Canadian government tabled Bill C-27, the Digital Charter Implementation Act, 2022. Bill C-27 proposes to enact, among other things, the Artificial Intelligence and Data Act (AIDA). Although there have been previous attempts to regulate automated decision-making as part of federal privacy reform efforts, AIDA is Canada's first effort to regulate AI systems outside of privacy legislation. [2]
If passed, AIDA would regulate the design, development, and use of AI systems in the private sector in connection with interprovincial and international trade, with a focus on mitigating the risks of harm and bias in the use of high-impact AI systems. AIDA sets out positive requirements for AI systems, as well as monetary penalties and new criminal offences for certain unlawful or fraudulent conduct in respect of AI systems.
Prior to AIDA, in April 2021, the European Commission presented a draft legal framework for regulating AI, the Artificial Intelligence Act (EU AI Act), which was one of the first attempts to comprehensively regulate AI. The EU AI Act sets out harmonized rules for the development, marketing, and use of AI and imposes risk-based requirements for AI systems and their operators, as well as prohibitions on certain harmful AI practices.
Broadly speaking, AIDA and the EU AI Act are both focused on mitigating the risks of bias and harm caused by AI while seeking to leave room for technological innovation. In an effort to be future-proof and keep pace with advances in AI, both AIDA and the EU AI Act define artificial intelligence in a technology-neutral manner. However, AIDA relies on a more principles-based approach, while the EU AI Act is more prescriptive in classifying high-risk AI systems and harmful AI practices and in controlling their development and deployment. Further, much of the substance and detail of AIDA is left to be elaborated in future regulations, including the key definition of high-impact AI systems to which most of AIDA's obligations attach.
The table below sets out some of the key similarities and differences between the current drafts of AIDA and the EU AI Act.
High-risk system means:
The EU AI Act does not apply to:
AIDA does not stipulate an outright ban on AI systems presenting an unacceptable level of risk.
It does, however, make it an offence to:
The EU AI Act prohibits certain AI practices and certain types of AI systems, including:
Persons who process anonymized data for use in AI systems must establish measures (in accordance with future regulations) with respect to:
High-risk systems that use data sets for training, validation and testing must be subject to appropriate data governance and management practices that address:
Data sets must:
Transparency. Persons responsible for high-impact systems must publish on a public website a plain-language description of the AI system which explains:
Transparency. AI systems which interact with individuals and pose transparency risks, such as those that incorporate emotion recognition systems or risks of impersonation or deception, are subject to additional transparency obligations.
Regardless of whether or not the system qualifies as high-risk, individuals must be notified that they are:
Persons responsible for AI systems must keep records (in accordance with future regulations) describing:
High-risk AI systems must:
Providers of high-risk AI systems must:
The Minister of Industry may designate an official to be the Artificial Intelligence and Data Commissioner, whose role is to assist in the administration and enforcement of AIDA. The Minister may delegate any of their powers or duties under AIDA to the Commissioner.
The Minister of Industry has the following powers:
The European Artificial Intelligence Board will assist the European Commission in providing guidance and overseeing the application of the EU AI Act. Each Member State will designate or establish a national supervisory authority.
The Commission has the authority to:
Persons who commit a violation of AIDA or its regulations may be subject to administrative monetary penalties, the details of which will be established by future regulations. Administrative monetary penalties are intended to promote compliance with AIDA.
Contraventions of AIDA's governance and transparency requirements can result in fines:
Persons who commit more serious criminal offences (e.g., contravening the prohibitions noted above or obstructing or providing false or misleading information during an audit or investigation) may be liable to:
While both acts define AI systems relatively broadly, the definition provided in AIDA is narrower. AIDA captures only technologies that process data autonomously or partly autonomously, whereas the EU AI Act does not stipulate any degree of autonomy. This distinction in AIDA is arguably a welcome divergence from the EU AI Act, which, as currently drafted, would appear to capture even relatively innocuous technology, such as the use of a statistical formula to produce an output. That said, there are indications that the EU AI Act's current definition may be modified before the final version is published, and that it will likely be accompanied by regulatory guidance for further clarity. [4]
Both acts are focused on avoiding harm, a concept they define similarly. The EU AI Act is, however, slightly broader in scope as it considers serious disruptions to critical infrastructure a harm, whereas AIDA is solely concerned with harm suffered by individuals.
Under AIDA, high-impact systems will be defined in future regulations, so it is not yet possible to compare AIDA's definition of high-impact systems to the EU AI Act's definition of high-risk systems. The EU AI Act identifies two categories of high-risk systems. The first category is AI systems intended to be used as safety components of products, or as products themselves. The second category is AI systems listed in an annex to the act that present a risk to the health, safety, or fundamental rights of individuals. It remains to be seen how Canada would define high-impact systems, but the EU AI Act provides an indication of the direction the federal government could take.
Similarly, AIDA also defers to future regulations with respect to risk assessments, while the proposed EU AI Act sets out a graduated approach to risk in the body of the act. Under the EU AI Act, systems presenting an unacceptable level of risk are banned outright. In particular, the EU AI Act explicitly bans manipulative or exploitative systems that can cause harm, real-time biometric identification systems used in public spaces by law enforcement [3], and all forms of social scoring. AI systems presenting low or minimal risk are largely exempt from regulation, except for transparency requirements.
AIDA only imposes transparency requirements on high-impact AI systems, and does not stipulate an outright ban on AI systems presenting an unacceptable level of risk. It does, however, empower the Minister of Industry to order that a high-impact system presenting a serious risk of imminent harm cease being used.
AIDA's application is limited by the constraints of the federal government's jurisdiction. AIDA broadly applies to actors throughout the AI supply chain, from design to delivery, but only as their activities relate to international or interprovincial trade and commerce. AIDA does not expressly apply to intra-provincial development and use of AI systems. Government institutions (as defined under the Privacy Act) are excluded from AIDA's scope, as are products, services, and activities that are under the direction or control of specified federal security agencies.
The EU AI Act specifically applies to providers (although this term may be interpreted broadly) and users of AI systems, including government institutions, but excludes AI systems developed exclusively for military purposes. The EU AI Act also expressly applies to providers and users of AI systems located outside the EU insofar as the output produced by those systems is used in the EU.
AIDA is largely silent on requirements with respect to data governance. In its current form, it only imposes requirements on the use of anonymized data in AI systems, most of which will be elaborated in future regulations. AIDA's data governance requirements will apply to anonymized data used in the design, development, or use of any AI system, whereas the EU AI Act's data governance requirements will apply only to high-risk systems.
The EU AI Act sets the bar very high for data governance. It requires that training, validation, and testing datasets be free of errors and complete. In response to criticism that this standard is too strict, the European Parliament has introduced an amendment to the act that proposes to make error-free and complete datasets an overall objective to the extent possible, rather than a precise requirement.
While AIDA and the EU AI Act both set out requirements with respect to assessment, monitoring, transparency, and data governance, the EU AI Act imposes a much heavier burden on those responsible for high-risk AI systems. For instance, under AIDA, persons responsible for high-impact systems will be required to implement mitigation, monitoring, and transparency measures. The EU AI Act goes a step further by putting high-risk AI systems through a certification scheme, which requires that the responsible entity conduct a conformity assessment and draw up a declaration of conformity before the system is put into use.
Both acts impose record-keeping requirements. Again, the EU AI Act is more prescriptive, but its requirements will apply only to high-risk systems, whereas AIDA's record-keeping requirements would apply to all AI systems.
Finally, both acts contain notification requirements that are limited to high-impact (AIDA) and high-risk (EU AI Act) systems. AIDA imposes a slightly heavier burden, requiring notification for all uses that are likely to result in material harm. The EU AI Act only requires notification if a serious incident or malfunction has occurred.
Both AIDA and the EU AI Act provide for the creation of a new monitoring authority to assist with administration and enforcement. The powers attributed to these entities under both acts are similar.
Both acts contemplate significant penalties for violations of their provisions. AIDA's penalties for more serious offences (up to $25 million CAD or 5% of the offender's gross global revenues from the preceding financial year) are significantly greater than those found in Quebec's newly revised privacy law and the EU's General Data Protection Regulation (GDPR). The EU AI Act's most severe penalty is higher than both the GDPR's and AIDA's: up to €30 million or 6% of gross global revenues from the preceding financial year for non-compliance with prohibited AI practices or the quality requirements set out for high-risk AI systems.
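To illustrate how these revenue-based caps scale, the sketch below computes the maximum administrative penalty under each regime for a hypothetical company. It assumes, consistent with the current drafts, that the applicable cap is the greater of the fixed amount and the revenue-based percentage; the example revenue figures are purely illustrative and this is not legal advice.

```python
# Illustrative sketch only: maximum penalty caps under the current drafts of
# AIDA and the EU AI Act, assuming the cap is the greater of the fixed amount
# and the stated percentage of revenue/turnover.

def aida_max_penalty_cad(gross_global_revenue_cad: float) -> float:
    """Most serious AIDA offences: greater of $25M CAD or 5% of gross global revenues."""
    return max(25_000_000, 0.05 * gross_global_revenue_cad)

def eu_ai_act_max_penalty_eur(worldwide_turnover_eur: float) -> float:
    """Most severe EU AI Act tier: greater of EUR 30M or 6% of worldwide turnover."""
    return max(30_000_000, 0.06 * worldwide_turnover_eur)

if __name__ == "__main__":
    # Hypothetical company with $2B CAD in gross global revenues (EUR 1.4B turnover).
    print(aida_max_penalty_cad(2_000_000_000))       # 100,000,000.0 (5% exceeds the $25M floor)
    print(eu_ai_act_max_penalty_eur(1_400_000_000))  # 84,000,000.0 (6% exceeds the EUR 30M floor)
```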
In contrast to the EU AI Act, AIDA also introduces criminal offences for the most serious contraventions of the act.
Finally, the EU AI Act would also grant discretionary power to Member States to determine additional penalties for infringements of the act.
While both AIDA and the EU AI Act have broad similarities, it is impossible to predict with certainty how similar they could eventually be, given that so much of AIDA would be elaborated in future regulations. Further, at the time of writing, Bill C-27 has only completed first reading, and is likely to be subject to amendments as it makes its way through Parliament.
It is still unclear how much influence the EU AI Act will have on AI regulations globally, including in Canada. Regulators in both Canada and the EU may aim for a certain degree of consistency. Indeed, many have likened the EU AI Act to the GDPR, in that it may set global standards for AI regulation just as the GDPR did for privacy law.
Regardless of the fates of AIDA and the EU AI Act, organizations should start considering how they plan to address a future wave of AI regulation.
For more information on the potential implications of the new Bill C-27, the Digital Charter Implementation Act, 2022, please see our bulletin on this topic, The Canadian Government Undertakes a Second Effort at Comprehensive Reform to Federal Privacy Law.
[1] There have been a number of recent developments in AI regulation, including the United Kingdom's Algorithmic Transparency Standard, China's draft regulations on algorithmic recommendation systems in online services, the United States' Algorithmic Accountability Act of 2022, and the collaborative effort between Health Canada, the FDA, and the United Kingdom's Medicines and Healthcare Products Regulatory Agency to publish Guiding Principles on Good Machine Learning Practice for Medical Device Development.
[2] In the public sphere, the Directive on Automated Decision-Making guides the federal government's use of automated decision systems.
[3] This prohibition is subject to three exhaustively listed and narrowly defined exceptions where the use of such AI systems is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks: (1) the search for potential victims of crime, including missing children; (2) certain threats to the life or physical safety of individuals or a terrorist attack; and (3) the detection, localization, identification or prosecution of perpetrators or suspects of certain particularly reprehensible criminal offences.
[4] As an indication of potential changes, the Slovenian Presidency of the Council of the European Union tabled a proposed amendment to the act in November 2021 that would effectively narrow the scope of the regulation to machine learning.