I recently asked ChatGPT to write me a Haiku on snowboarding. This is what it came up with:
Powder on the slope
Board beneath my feet gliding
Winter’s joy in flight
Clearly, a kindred spirit! Perhaps life under our new AI overlords won’t be so terrible after all!
Jokes aside, even if we never reach the General AI Singularity, the pervasive role AI plays in life will still present enormous technical, legal and moral challenges. How those challenges play out is anyone’s guess, but we can get a sense of how society might come to terms with AI by looking at the initiatives underway in places like the European Union, Singapore and the UAE.
As was the case with the EU’s GDPR, we can expect that the EU’s draft Artificial Intelligence Act (AIA) will heavily influence AI regulations around the world, especially given that much of the non-business risk associated with AI relates to data privacy issues. Some key features of the AIA that we are likely to see replicated in other jurisdictions include:
Extraterritorial effect and application to the entire AI supply chain – the AIA will have extraterritorial effect and will apply to businesses (a concept that includes developers as well as distributors, importers, users and third parties) where the AI system itself or a user of the AI system is based in the EU, or the output produced by the AI system is used in the EU. The AIA will also impose obligations on third party users of AI systems. In other words, outsourcing your AI system will not absolve you of responsibility if the outsourced services rely on non-compliant AI technology.
Potentially severe penalties for non-compliance – the AIA provides for fines as high as EUR 30 million or up to six percent (6%) of the global annual turnover of the non-compliant business. For business owners and managers wanting to make use of AI, especially tech-heavy startups, it will be critically important to understand where their AI is used, how their AI works, and how third-party service providers use their AI.
Risk based approach to AI classification – the AIA categorises AI systems as (1) unacceptable risk AI systems; (2) high risk AI systems; and (3) limited and minimal risk AI systems.
Unacceptable risk AI systems – includes all forms of social scoring, real-time remote biometric identification systems used in public spaces for law enforcement purposes and subliminal, manipulative or exploitative techniques causing harm.
High risk AI systems – includes safety-critical systems (such as medical devices and the supply of electricity, water and gas), systems governing access to education and the assessment of students, credit-worthiness evaluation, employment (recruitment/termination) related matters, and certain systems used in the administration of justice.
Limited and minimal risk AI systems – includes AI chatbots, spam filters, inventory management systems, AI-enabled video games, etc.
Compliance and AI Licensing
The AIA in its current form prohibits unacceptable risk AI systems outright and requires businesses to prepare and implement documentation and processes in relation to high risk AI systems. These include:
a risk management system;
a quality assurance system;
data governance and monitoring systems;
provisions for human oversight;
technical documentation (including instructions for use);
systems to ensure accuracy, robustness and cybersecurity;
conformity assessment procedures;
post-market monitoring systems; and
processes for record keeping and logging.
Many of these governance concepts align with the Model AI Governance Framework (MAGF) published by Singapore’s Personal Data Protection Commission (PDPC) which, like the AIA, adopts an industry- and technology-agnostic approach in favour of a use-based governance model. The PDPC’s MAGF encourages:
strong internal governance arrangements with designated roles and responsibilities;
active human engagement in the operation of the AI system and AI decision making processes;
operations management to avoid data bias, with regular tuning of the AI system; and
stakeholder engagement allowing for feedback and user disclosure.
In both the AIA and the MAGF we see a clear recognition that AI risk needs to be actively managed by AI system providers and users, with robust procedures in place to ensure appropriate use and oversight. The MAGF helpfully provides a meta-list of AI governing principles that are emerging around the world and that will undoubtedly inform future AI regulations. These include:
Human centricity and wellbeing
Human rights alignment
As regulators move to exercise greater oversight and control of AI systems, we may see licensing regimes develop that will require AI system providers and users to demonstrate their compliance with many of these principles. As a case in point, in the UAE, many of these same concepts are incorporated in Digital Dubai’s published list of AI principles and ethical guidelines.
Establish Responsible AI Policies and Procedures Now
The speed at which AI is being integrated into our commercial environment means that all businesses need to anticipate their future use of AI systems and the legal and moral obligations that will flow from that. Inevitably, we will see links to responsible AI statements at the bottom of websites, together with the license details of the AI systems used by the business in question.
Contracts will also adjust for AI use. Just as we see contract provisions relating to anti-bribery, anti-slavery, data privacy, etc., it will soon be commonplace for contracting parties to ask for representations and warranties that any AI systems used in the services provided are compliant with all relevant laws and regulations, as well as a range of ethical guidelines and principles. As with liabilities for anti-slavery, anti-bribery, etc., the market will generally resist liability caps on claims for breaches of AI representations and warranties, and for AI failures generally.
Any business that uses an AI system (either directly or via a third-party service provider) should already be giving serious thought to establishing robust compliance policies and procedures to meet current and future legal and ethical expectations and to protect themselves from counterparties that fail to do so.
For further information, contact Glen Falting.