Publication

10 July 2024

Artificial Intelligence and the Law: Recent Developments and Implications

Most Michigan-based companies and individuals currently operate in a business environment characterized by an absence of artificial intelligence (AI)-specific laws and regulations. However, the days of largely unregulated AI use may be ending as the legal landscape begins to catch up. Those using or considering the use of AI in their day-to-day business operations and other activities are increasingly subject to both legislative and regulatory oversight from a variety of jurisdictions and regulatory bodies.

The following provides a short summary of significant laws, regulations, and guidance that Michigan-based businesses and individuals may soon encounter (if they have not already):

Blueprint for an AI Bill of Rights: In October 2022, the White House Office of Science and Technology Policy (OSTP) proposed a set of principles and practices, known as the Blueprint for an AI Bill of Rights, to guide the design, use, and deployment of artificial intelligence systems. In doing so, the OSTP aims to protect the rights of the American public through adherence to a set of five principles: (1) safe and effective systems, (2) algorithmic discrimination protections, (3) data privacy, (4) notice and explanation, and (5) human alternatives, consideration, and fallback.

President Biden’s Executive Order on AI: One year later, President Biden issued an executive order emphasizing responsible AI development. The order focuses on the safe, secure, and trustworthy development of AI and requires federal agencies to adhere to principles including safety, privacy, and fairness. It directs federal agencies to take significant actions, such as:

  • conducting rigorous evaluations of AI systems to ensure safety and security;
  • standardizing testing procedures;
  • addressing security risks;
  • prioritizing equity and civil rights through responsible development;
  • bolstering privacy rights and protections;
  • supporting research grants to foster innovation; and
  • establishing a National AI Research Resource for researchers and students.

To promote responsible AI development and implementation across federal agencies, the Executive Order underscores the significance of designating points of contact for AI-related matters. Furthermore, specific requirements apply to government contractors concerning AI risk assessment, particularly in critical infrastructure, financial sectors, and operational AI pilot projects. This order may serve as a particularly relevant framework for U.S. clients as the document places special emphasis on transparency and accountability.

AI Risk Management Framework: The National Institute of Standards and Technology (NIST) provides ongoing guidelines for managing AI risks through its AI Risk Management Framework. The framework addresses safety, trustworthiness, and evaluation of AI systems and is useful for those navigating AI adoption and risk management, including AI governance, access, and implementation policies.

EU’s Artificial Intelligence Act: On March 13, 2024, the European Union became the first jurisdiction to regulate AI comprehensively through the passage of the EU Artificial Intelligence Act. Intended to regulate particularly dangerous uses of artificial intelligence more strictly, the Act follows a risk-based approach, imposing stricter rules for higher-risk AI systems, such as those impacting health, education, employment, and safety. It aims to foster safe and trustworthy AI while respecting fundamental rights. The Act is likely to set global standards for future AI legislation, both by influencing the policies of other jurisdictions and by impacting international businesses with meaningful assets or operations in the EU.

Colorado Artificial Intelligence Act: On May 17, 2024, Colorado made history by becoming the first U.S. state to enact a law explicitly and systematically regulating AI. The Colorado Artificial Intelligence Act (CAIA), which overcame significant lobbying efforts by technology companies, aims to address the development and deployment of certain AI systems. Focusing primarily on high-risk AI applications—those making critical decisions in areas like education, employment, financial services, healthcare, and legal matters—the CAIA has several aims. It seeks to combat algorithmic discrimination, enhance transparency and accountability, mandate risk assessments with mitigation strategies, promote human oversight, and impose obligations on both AI developers and deployers. It mandates that companies using AI for consequential decisions disclose its purpose to consumers, job applicants, and others, in addition to requiring transparency about system testing for biases. The law may also serve as groundwork for future state and federal legislation, although even Colorado Governor Jared Polis noted that changes to the law would be necessary before it becomes fully effective in February 2026.

Michigan AI Election Bill: Lastly, while Michigan has yet to adopt comprehensive AI legislation, on November 30, 2023, the state enacted laws banning AI-generated deepfake political ads within 90 days of an election, unless the use of AI is clearly disclosed. While the law in its current form may be applicable to only a narrow group of actors, Michigan’s early adoption indicates a desire to be a leading state in AI regulation across multiple sectors. Michigan is one of five states, along with Texas, Minnesota, California, and Washington, that have enacted legislation to limit the use of AI during elections. At least one additional bill has been introduced in the Michigan legislature to regulate AI, focusing on explicit “intimate” deepfake content.

Other States’ Engagement: Beyond the states that have already passed AI-related laws, seven others—New York, California, Washington, Texas, Louisiana, Illinois, and Connecticut—are actively reviewing and vetting comprehensive AI legislation. These states are addressing a range of topics, including compliance, accountability, development and deployment, and expanding privacy acts.

As laws and regulations work to keep up with the pace of AI development, the legal landscape surrounding artificial intelligence continues to evolve at an increasing rate. Miller Johnson strongly encourages both businesses and individuals to stay informed of this dynamic AI regulatory environment. As always, contact your Miller Johnson attorney for personalized guidance on AI risk assessments, governance, policy implementation, and more. We remain committed to providing you with the most current and relevant information to help your business succeed in the age of artificial intelligence. If you have any questions, please reach out to one of the authors.