The growth of global AI legislation means the time to invest in AI governance is now

Lawmakers around the world are responding to GenAI risk with new requirements for companies. Learn about key laws and why AI governance is essential to ensuring you comply.

Written by Adam Roberts

Published: December 17, 2024


For many, Zillow is the website you browse in your free time, idly imagining yourself starting your journey to property mogul with a mansion in Tuscany or a fixer-upper in Tucson. But if you browsed the site in the midst of the COVID-19 pandemic, and found yourself making a bid, you may have been competing with an algorithm.

In 2018, the company launched Zillow Offers, a home-buying division that used tailored algorithms to find underpriced houses it could buy, improve, and then flip for a profit. The effort was at one point a major plank in the company’s strategy – CEO Richard Barton aimed to buy 5,000 homes a month by 2024. The problem? The models couldn’t account for the complexities of a volatile market, oversight was insufficient, and as a result the company was consistently overpaying for homes.

The impact for Zillow: the company had to write off $304 million in Q3 2021, close the Zillow Offers division, and lay off 25% of its workforce, leaving it with about 7,000 homes to unload at a loss.

In the three years since the Zillow Offers debacle, machine learning and Generative AI (GenAI) have grown more sophisticated and more widely deployed. Whether a commercial model like Gemini or ChatGPT, or an in-house implementation, every organization now faces pressure to incorporate GenAI into their strategy. But if a large technology company like Zillow struggled with implementation and governance for a (presumably) simpler machine learning model, advances in technology have not been accompanied by improvements in governance.

The rise of GenAI brings elevated risk

Indeed, since its widespread deployment in 2022, GenAI has seen its fair share of mistakes and breaches of trust. One notable example:

  • ChatGPT’s hallucinations drew a privacy complaint from an EU resident, who says the model fabricated his date of birth, breaching the General Data Protection Regulation’s (GDPR) accuracy principle and the right to rectify inaccurate information.

The risk from GenAI can come at all points in the technology's development and production lifecycle. Research shows only 24% of GenAI initiatives are being secured, which threatens to expose the data and models to breaches.  

Whether it is personally identifiable information (PII) in the training data, PII in corporate data sets, or GenAI output that is poor quality or malicious, significant risk is inherent in using this technology. The question is how to govern AI to remove as much risk as possible at every stage.
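To make the training-data risk concrete, below is a minimal sketch of a PII scrubbing step run before text enters a training corpus. It assumes simple regex rules for emails and phone numbers; a production pipeline would rely on a dedicated PII detection service and cover many more categories.

```python
import re

# Illustrative patterns only: real PII detection uses trained models or a
# dedicated service, and covers far more categories (names, addresses, IDs).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders before the text
    is allowed into a GenAI training corpus."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub_pii("Contact Jane at jane.doe@example.com or +1 (555) 014-2398."))
# -> Contact Jane at [EMAIL] or [PHONE].
```

Even a basic gate like this illustrates the governance principle: risky data is identified and neutralized before it reaches the model, not after.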

Fortunately, the governments of the world have not stood still while this revolution has occurred and have been busy working on their own answers to this question. Building on the progress in privacy legislation over the last decade, nations across the globe have begun work on regulating this new technology.

Wherever your business is based, your leaders and lawmakers are considering their response to this AI revolution. Some countries have moved faster than others and established themselves as models when it comes to AI laws and regulation.

Let’s look at how they are doing, and what may be ahead in the AI regulatory landscape. We’ll start with a review of AI laws and regulations in a handful of key regions and countries, before considering what we can expect in 2025 and how organizations should respond.

On that last point, no matter where you are based, the time to address this is now. Even if the government in your jurisdiction has yet to address AI regulation, you need to be proactive. As with privacy and cybersecurity, organizations need to integrate privacy, security, and ethical considerations into the development and deployment of AI systems from the very beginning, rather than waiting for a Zillow-sized catastrophe.

Key AI laws and regulation

As we’ve discussed, some countries have taken the reins early when it comes to AI governance. Let’s take a world tour, starting with the leaders and moving to those whose efforts are less developed.

The European Union: a “world-first” approach to regulating AI

Just as the General Data Protection Regulation (GDPR) did for privacy before it, the EU's AI Act, agreed in December 2023, aims to set a global standard for regulating AI. Building on the bloc's AI strategy communications dating back to 2018, the Act sets harmonized rules for AI in the EU market. Specifically, the Act:

  • Applies to providers and deployers in the EU, and to those in third countries that place AI systems on the EU market.  
  • Centers around a risk-based approach.  
  • Prohibits use of certain AI systems and provides specific requirements for high-risk systems.  
  • Creates harmonized transparency rules for certain AI systems.

The Act went into force on August 1, 2024, though certain provisions will apply at different dates; notably, penalties will apply from August 2, 2025, with the exception of fines for providers of General-Purpose AI models. For more on the AI Act, and how to ensure your organization complies with the law, read our article on the subject.

The United Kingdom: a flexible, sector-based approach

So far, the UK has adopted a flexible, context-based approach to AI regulation, leveraging existing sectoral laws to impose guardrails on AI systems, and offers resources like guidance on AI ethics and frameworks for public sector AI use and auditing.

Striking a balance between encouraging AI development and maintaining ethical oversight, the government has also made the following resource available for policy guidance:

  • AI Standards Hub, a UK initiative dedicated to the evolving and international field of standardization for AI technologies.

The United States: a patchwork of AI governance initiatives

For those tracking the evolution of privacy regulations in the US, the story here will feel familiar: a mix of legislative acts, executive orders, and voluntary frameworks covering certain aspects of AI governance, with state-specific laws filling the gaps. The government has also promoted trust in AI through frameworks like the Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework.

The US has released numerous frameworks and guidelines. Congress has passed legislation to preserve US leadership in AI research and development, as well as control government use of AI.  

In May 2023, the Biden-Harris administration updated the National AI Research and Development Strategic Plan, emphasizing a principled and coordinated approach to international collaboration in AI research.  

The Office of Science and Technology Policy has issued a request for information to obtain public input on AI's impact. The National Telecommunications and Information Administration sought feedback on what policies can create trust in AI systems through an AI Accountability Policy Request for Comment.  

Specific AI governance law and policy includes:

Executive orders
  • Maintaining American Leadership in AI  
  • Promoting the Use of Trustworthy AI in the Federal Government  
  • The Safe, Secure, and Trustworthy Development and Use of AI  
Acts and bills
  • AI Training Act [IN FORCE]  
  • National AI Initiative Act (Division E, Sec. 5001) [IN FORCE]  
  • AI in Government Act (Division U, Sec. 101) [IN FORCE]  

State-specific AI legislation

As with privacy law, the states won’t wait for the federal government to establish a legal framework. Colorado, Utah, and California have passed their own bills regulating the use of AI, while Illinois, Massachusetts, and Ohio have bills in committee. California is a particularly important case, given that many of the technology companies behind the models are based there. In 2024, the state passed a raft of laws related to AI transparency, though Governor Gavin Newsom vetoed a bill that would have regulated frontier AI models.

In addition to formal laws, state bodies have issued their own guidance. For example, the New York Department of Financial Services published AI cybersecurity guidance covering incidents that can arise from AI, along with strategies to mitigate those risks as reliance on the technology grows.

Australia: a proactive approach to establish the country as a global leader

The Australian federal government has taken a proactive approach to AI regulation, focused on updating and enhancing existing frameworks. Its AI Action Plan (2021) promoted trusted, secure AI, with the government stating its aim to establish the country as “a global leader in developing and adopting trusted, secure and responsible AI”.

Safe and responsible AI in Australia

In 2023, the Department of Industry, Science and Resources released a discussion paper on safe and responsible AI (see RecordPoint’s submission).

The government provided an interim response in January 2024. It highlighted 10 existing legislative frameworks that would require amendments to accommodate GenAI, and noted that existing laws likely do not adequately prevent AI-facilitated harms before they occur, with more work needed to ensure an adequate response to harms after they occur.

The federal government committed to:

  • Consider mandatory safeguards for those who develop or deploy AI systems in legitimate, high-risk settings
  • Consider possible legislative vehicles for introducing mandatory safety guardrails for AI in high-risk settings in close consultation with industry and the community
  • Consider specific obligations for the development, deployment and use of frontier or general-purpose models, and collaborate with international partners to establish safety mechanisms and testing during the AI product lifecycle, noting that models developed overseas can be built into applications in Australia.  

Canada

Canada is advancing the AI and Data Act (Bill C-27) to regulate high-risk AI systems, protect human rights, and ensure responsible AI development. The government also issued a code of practice for GenAI and a Directive on Automated Decision-Making for federal use.

AI and Data Act (AIDA)

Canada's anticipated AI and Data Act, part of Bill C-27, is intended to protect Canadians from high-risk systems, ensure the development of responsible AI, and position Canadian firms and values at the forefront of global AI development. The AIDA would:

  • Ensure high-impact AI systems meet existing safety and human rights expectations.  
  • Prohibit reckless and malicious uses of AI.  
  • Empower the Minister of Innovation, Science and Industry to enforce the act.

Separately, Canada published a code of practice for generative AI development and use in anticipation of, and to ease compliance with, the AIDA.

What to watch for in 2025: Common themes in AI regulation

Generative AI is a young, disruptive technology, and we’re still early in regulating it. But when we look ahead, a few clear themes emerge.

The rise of risk-based approaches

Jurisdictions like the EU and Canada are adopting a risk-based approach to AI regulation, classifying AI systems by their potential impact on safety and human rights. Expect more to follow suit as they seek to regulate high-risk AI applications while promoting innovation in lower-risk sectors.

Privacy and data protection at the core

AI governance is heavily influenced by privacy regulations, with frameworks like GDPR and CCPA setting the standard for data protection.  

Heading into 2025, privacy by design and data minimization will continue to be major pillars of AI regulation, ensuring that AI systems respect individual rights and comply with global privacy laws.

Ethical AI standards take center stage

The push for ethical AI is only intensifying, with more countries focusing on preventing algorithmic biases and ensuring equitable outcomes. Transparency, accountability, and fairness are becoming non-negotiable in the development of AI models.

In 2025, we will likely see further moves toward standardizing ethical AI practices, along with the creation of more AI ethics frameworks.

Sector-specific vs. unified regulation

A clear division has emerged between jurisdictions that favor broad, cross-sector regulation (such as the EU with its AI Act) and countries like the UK that are adopting more flexible, sector-specific approaches.  

As 2025 approaches, expect a debate over the benefits of comprehensive regulations versus industry-specific standards, with both approaches likely to coexist in many regions.

Global collaboration and harmonization

While states debate the right way to regulate AI, they will also need to cooperate to establish globally aligned standards.  

In 2025, governments will increasingly recognize the need for cross-border regulatory alignment to ensure consistency and reduce compliance complexity for businesses operating in multiple jurisdictions.

How RecordPoint can help

Great AI governance begins with solid data management, and RecordPoint is perfectly positioned to help organizations build a foundation for responsible, secure, and compliant AI use. Here's how:

  • Data discovery and classification: RecordPoint’s automated data discovery and classification help organizations identify, categorize, and manage the data that powers AI systems. This ensures only relevant, accurate, and appropriately classified data is used, reducing the risks of biased or non-compliant AI outputs.
  • Data minimization and compliance: With AI regulation often emphasizing data privacy (e.g., GDPR, CCPA), RecordPoint enables data minimization by ensuring that only the necessary data is collected and retained. This supports compliance with privacy laws while protecting sensitive data.
  • Privacy and sensitivity controls: RecordPoint provides robust data protection features that ensure sensitive and personal data is securely managed, aligning with privacy-by-design principles to safeguard privacy throughout the AI lifecycle.
  • Security and Intellectual Property protection: Only 24% of Generative AI initiatives are secured, risking data breaches and exposure of sensitive information. RecordPoint helps safeguard intellectual property and sensitive data by enforcing security controls and access management within the platform, ensuring AI systems are built on secure, trusted data.
  • Auditability and transparency: By maintaining a transparent and auditable record of all data actions, RecordPoint supports organizations in meeting AI regulatory requirements for transparency and accountability, such as those in the EU’s AI Act and the US’s proposed Algorithmic Accountability Act.

The rise of AI governance committees

Many organizations, recognizing the transformative nature of GenAI, are creating dedicated committees responsible for overseeing AI governance.  

These committees typically include representatives from relevant departments, such as IT, legal, compliance, and ethics. The goal of these committees is to ensure AI initiatives align with organizational values and regulatory requirements.

One of the first items on the agenda is often good data management. After all, how can you safely and securely deploy AI applications without a clear understanding of your data?

Business owners looking to adopt AI must therefore first focus on managing their data estate and gaining that understanding before leveraging their data for AI applications.
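As a minimal, hypothetical illustration of that first step, the sketch below inventories a local file share by file type. A real data estate spans SaaS platforms, databases, and mail systems, and calls for dedicated discovery tooling, but the principle of knowing what you hold before using it is the same.

```python
from collections import Counter
from pathlib import Path

def inventory(root: str) -> Counter:
    """Count files by extension under `root`: a first-pass answer to
    'what kinds of data do we actually hold?'"""
    counts = Counter()
    for path in Path(root).rglob("*"):
        if path.is_file():
            counts[path.suffix.lower() or "(no extension)"] += 1
    return counts

# Summarize the ten most common file types in the current directory tree.
for ext, n in inventory(".").most_common(10):
    print(f"{ext:16} {n}")
```

A summary like this won't satisfy a regulator on its own, but it is the kind of visibility every later governance control depends on.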

Conclusion: Innovating with AI, while mitigating risk

As we’ve seen, governments around the world have woken up to the risk inherent in GenAI development and usage, and have responded with a range of frameworks and laws tailored to their environments. We have seen a split between sector-based approaches and broad cross-sectoral laws. But whatever the approach, every law requires businesses to manage their data better, and to do so proactively, from the very beginning.

If you are not sure where you stand, it’s time to find out.

Learn more about how investing in AI governance and compliance is the essential next step in your AI journey.
