The Diverging Paths of AI Regulation

Milgrom Team

The Google Pixel 6 launched recently. Among its most anticipated features are new artificial intelligence and machine learning ("AI") capabilities, including speech recognition and translation and AI-powered photo editing. Indeed, although the phone includes significant hardware advancements, most commentators recognized that its AI advancements were the driving factor behind its success.

This focus on the benefits of AI follows a broader societal trend of increasing recognition that AI has countless untapped benefits. Whether it is AlphaZero demonstrating new playing styles in chess, insurers dramatically improving efficiency in underwriting and claims processing, or countries using facial recognition to monitor their citizens, AI is and will continue to fundamentally change the world.

However, countries’ recognition of and reactions to AI have not been consistent. Europe, as expected given its strong stance on individual data rights and privacy, has taken the most aggressive and restrictive stance against the use of AI. Back in 2018, the European Union’s General Data Protection Regulation took effect across the European Economic Area, restricting the use of some automated decision making. Additionally, on April 21, 2021, the European Union released a draft of the EU Artificial Intelligence Act, which further attempts to regulate AI through harmonized rules within the European Union. The proposed rules are broad in scope – they apply even to those outside the EU that market or provide AI systems to the EU – and AI itself is defined expansively, encompassing most processes that could reasonably be considered AI.

The proposed rules separate AI into three tiers of risk: unacceptable-risk AI systems, high-risk AI systems, and limited- and minimal-risk AI systems. Unacceptable-risk AI systems, such as social scoring or real-time remote biometric identification systems, are banned outright under the proposed rules. High-risk AI systems, including systems that evaluate a consumer’s creditworthiness or use biometric identification in non-public spaces, require company oversight, including audits similar to Data Protection Impact Assessments to ensure that the systems perform as intended and are secure. Limited- and minimal-risk systems remain subject to little oversight, as the authorities are less concerned about potential abuses of such AI.

On the other end of the spectrum, China has fully embraced the use of AI. Instead of worrying about negative privacy implications, China has leaned on AI as a tool to build its society and government. Among the uses receiving the most coverage are China’s use of facial recognition and other AI methods to keep tabs on its citizens, such as the use of AI emotion-detection software on Uyghurs. More generally, however, China has woven AI into its social fabric, using it for everyday operations including its social credit system, which monitors citizens and rewards or punishes them for their behavior, its payment and communications systems, and even its defense systems. This general acceptance of AI has been backed by formal initiatives such as the “Made in China 2025” plan and the “Next Generation Artificial Intelligence Development Plan.” The effect has been a boom in the research, use, and acceptance of AI: whereas China previously lagged behind in AI research, it has now become a frontrunner.

Meanwhile, the United States stands in the middle of these two extremes.  Like the EU, the United States has agreed that AI should be used in ways that are “based on shared democratic values, including respect for human rights”.  Significantly, the U.S. and EU agreed that AI should not be used for social credit scoring.   However, the U.S. does not seem to share the EU’s concern over the other potentially invasive and threatening ways that AI could be used and has not publicly committed to a robust federal framework that addresses these AI issues. Instead, the U.S. appears to be concerned over the strategic and geopolitical issues that advanced AI will present, especially if other actors like China become world leaders.

Thus, because of the significant developments in AI and what those developments mean, all countries have been forced to reckon with AI regulation. However, geopolitical, historical, and other regulatory forces have produced responses that differ dramatically throughout the world. These responses have not only changed the trajectory of AI development in various parts of the world but have also increasingly foreclosed the possibility of harmonious AI regulation.

For additional information, please contact us.
