Lawmakers around the world are responding to GenAI risk with new requirements for companies. Learn about the key laws, and why AI governance is essential to staying compliant.
For many, Zillow is the website you browse in your free time, idly imagining yourself starting your journey to property mogul with a mansion in Tuscany or a fixer-upper in Tucson. But if you browsed the site in the midst of the COVID-19 pandemic, and found yourself making a bid, you may have been competing with an algorithm.
In 2021, the company was aggressively scaling what it called Zillow Offers, a home-buying division that used tailored algorithms to find underpriced houses it could buy, improve, and flip for a profit. The effort was at one point a major plank in the company’s strategy: CEO Richard Barton aimed to buy 5,000 homes a month by 2024. The problem? The models couldn’t account for the complexities of a volatile market, oversight was insufficient, and as a result, the company was consistently overpaying for homes.
The impact for Zillow: the company had to write off $304 million in Q3 2021, close the Zillow Offers division, and lay off 25% of its workforce. It was also left with about 7,000 homes to unload at a loss.
In the three years since the Zillow Offers debacle, machine learning and Generative AI (GenAI) have grown more sophisticated and more widely deployed. Whether through a commercial model like Gemini or ChatGPT, or an in-house implementation, every organization now faces pressure to incorporate GenAI into its strategy. But if a large technology company like Zillow struggled with implementation and governance for a (presumably) simpler machine learning model, what hope does everyone else have? Advances in the technology have not been matched by improvements in governance.
Indeed, since its widespread deployment in 2022, GenAI has seen its fair share of mistakes and breaches of trust. To pick a few:
The risk from GenAI can come at all points in the technology's development and production lifecycle. Research shows only 24% of GenAI initiatives are being secured, which threatens to expose the data and models to breaches.
Whether it is personally identifiable information (PII) in the training data, PII in corporate data sets, or GenAI output that is low-quality or malicious, significant risk is inherent in using this technology. The question is how to govern AI to remove as much of that risk as possible, at every stage.
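To make that concrete, here is a minimal sketch, in Python, of one such control: scanning and redacting obvious PII before a document ever reaches a training corpus. The PII_PATTERNS table and redact_pii helper are invented for illustration, and a handful of regexes is no substitute for dedicated data classification tooling.

```python
import re

# Illustrative patterns only; real governance programs rely on dedicated
# classification tools rather than hand-rolled regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> tuple[str, dict[str, int]]:
    """Redact each match and report how many of each PII type were found."""
    counts: dict[str, int] = {}
    for label, pattern in PII_PATTERNS.items():
        text, counts[label] = pattern.subn(f"[{label.upper()}]", text)
    return text, counts

sample = "Reach Jane at jane.doe@example.com or 555-867-5309 (SSN 123-45-6789)."
clean, report = redact_pii(sample)
print(clean)   # Reach Jane at [EMAIL] or [PHONE] (SSN [US_SSN]).
print(report)  # {'email': 1, 'us_ssn': 1, 'phone': 1}
```

Even a gate this crude surfaces a useful signal: the counts tell you which sources are leaking PII into your pipeline and deserve closer review.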
Fortunately, the governments of the world have not stood still while this revolution has occurred and have been busy working on their own answers to this question. Building on the progress in privacy legislation over the last decade, nations across the globe have begun work on regulating this new technology.
Wherever your business is based, your leaders and lawmakers are considering their response to this AI revolution. Some countries have moved faster than others and established themselves as models when it comes to AI laws and regulation.
Let’s review how they are doing, and what may be ahead of us in the AI regulatory landscape. We’ll start with a review of AI laws and regulations in a handful of key regions and countries, before moving to consider what we can expect in 2025, and how organizations should respond.
On that last point, no matter where you are based, the time to address this is now. Even if the government in your jurisdiction has yet to address AI regulation, you need to be proactive. As with privacy and cybersecurity, organizations need to integrate privacy, security, and ethical considerations into the development and deployment of AI systems from the very beginning, rather than waiting for a Zillow-sized catastrophe.
As we’ve discussed, when it comes to AI governance, some countries have taken the reins early. Let’s take a world tour, starting with the leaders and moving to those whose efforts are less developed.
Just like the General Data Protection Regulation (GDPR) did for privacy before it, the EU's AI Act, finalized in December 2023, aims to set a global standard for regulating AI. The Act sets harmonized, risk-based rules for AI in the EU market: it bans certain AI systems outright and imposes transparency obligations, building on the Commission's AI strategy communications dating back to 2018. In addition to this, the Act:
The Act entered into force on August 1, 2024, though its provisions apply in stages; notably, penalties apply from August 2, 2025, with the exception of fines for providers of General-Purpose AI models. For more on the AI Act, and how to ensure your organization complies with the law, read our article on the subject.
So far, the UK has adopted a flexible, context-based approach to AI regulation, leveraging existing sectoral laws to impose guardrails on AI systems. It offers resources like the AI Standards Hub, guidance on AI ethics, and frameworks for public sector AI use and auditing.
Striking a balance between encouraging AI development and maintaining ethical oversight, the government has made the following resources available for policy guidance:
For those tracking the evolution of privacy regulations in the US, the story here will feel familiar: a mix of legislative acts, executive orders, and voluntary frameworks at the federal level, with key laws and bills covering certain aspects of AI governance and state-specific laws filling the gaps. The government has also promoted trust in AI through frameworks like the AI Bill of Rights and the NIST AI Risk Management Framework.
The US has released numerous frameworks and guidelines. Congress has passed legislation to preserve US leadership in AI research and development, as well as control government use of AI.
In May 2023, the Biden-Harris administration updated the National AI Research and Development Strategic Plan, emphasizing a principled and coordinated approach to international collaboration in AI research.
The Office of Science and Technology Policy has issued a request for information to obtain public input on AI's impact. The National Telecommunications and Information Administration sought feedback on what policies can create trust in AI systems through an AI Accountability Policy Request for Comment.
As with privacy law, while the federal government establishes a legal framework, the states won’t wait. Colorado, Utah, and California have passed their own bills regulating the use of AI, while Illinois, Massachusetts, and Ohio have bills in committee. California is a particularly important case, given that many of the technology companies behind the models are based there. The state passed a raft of AI transparency laws in 2024, though Governor Gavin Newsom vetoed a bill regulating frontier AI models.
In addition to formal laws, state bodies have provided their own advice and guidelines. For example, the New York Department of Financial Services issued AI cybersecurity guidance covering the incidents that can arise from AI, along with strategies to mitigate those risks as reliance on the technology grows.
The Australian federal government has taken a proactive approach to AI regulation, focused on updating and enhancing existing frameworks. Its AI Action Plan (2021) promoted trusted, secure AI, with the stated aim of establishing the country as “a global leader in developing and adopting trusted, secure and responsible AI”.
In 2023, the Department of Industry, Science, and Resources released a discussion paper on safe AI (see RecordPoint’s submission).
The government provided an interim response in January 2024, highlighting 10 existing legislative frameworks that would require amendments to accommodate GenAI. It also acknowledged that existing laws likely do not adequately prevent AI-facilitated harms before they occur, and that more work is needed to ensure an adequate response to harms after they do.
The federal government committed to:
Canada’s anticipated Artificial Intelligence and Data Act (AIDA), part of Bill C-27, is intended to protect Canadians from high-risk AI systems, ensure the development of responsible AI, and position Canadian firms and values for adoption in global AI development. The government has also issued a code of practice for GenAI and a Directive on Automated Decision-Making governing federal use of the technology. The AIDA would:
Generative AI is a young, disruptive technology, and we’re still early in regulating it. But key trends are visible, and when we look ahead to the future, there are a few obvious themes to pick out.
Jurisdictions like the EU and Canada are adopting a risk-based approach to AI regulation, classifying AI systems by their potential impact on safety and human rights. Expect more countries to follow suit as they seek to regulate high-risk AI applications while promoting innovation in lower-risk sectors. The sketch below shows what this kind of classification can look like in practice.
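Here is a toy triage helper, loosely modeled on the EU AI Act’s four tiers (prohibited, high, limited, and minimal risk). The RiskTier enum, keyword sets, and triage rules are invented for illustration; the Act defines these categories in detailed annexes, and real classification requires legal review.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # e.g. social scoring of citizens
    HIGH = "high"              # e.g. hiring, credit, critical infrastructure
    LIMITED = "limited"        # e.g. chatbots, subject to transparency duties
    MINIMAL = "minimal"        # e.g. spam filters, game AI

# Toy keyword sets standing in for the Act's detailed annexes.
PROHIBITED_USES = {"social scoring"}
HIGH_RISK_DOMAINS = {"hiring", "credit", "education", "critical infrastructure"}

def triage(use_case: str, domain: str, user_facing: bool) -> RiskTier:
    """Assign a first-pass risk tier to a proposed AI use case."""
    if use_case.lower() in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if domain.lower() in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if user_facing:
        return RiskTier.LIMITED  # transparency duties, e.g. disclose the bot
    return RiskTier.MINIMAL

print(triage("resume screening", "hiring", user_facing=False))  # RiskTier.HIGH
print(triage("support chatbot", "retail", user_facing=True))    # RiskTier.LIMITED
```

The point of a triage step like this is not legal precision but consistency: every proposed use case gets an initial tier, and higher tiers trigger proportionally heavier review.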
AI governance is heavily influenced by privacy regulations, with laws like the GDPR and CCPA setting the standard for data protection.
Heading into 2025, privacy by design and data minimization will continue to be major pillars of AI regulation, ensuring that AI systems respect individual rights and comply with global privacy laws.
The push for ethical AI is only intensifying, with more countries focusing on preventing algorithmic biases and ensuring equitable outcomes. Transparency, accountability, and fairness are becoming non-negotiable in the development of AI models.
In 2025, we will likely see further moves toward standardizing ethical AI practices, along with the creation of more AI ethics frameworks.
A clear division has emerged between jurisdictions that favor broad, cross-sector regulation (such as the EU AI Act) and those, like the UK, that are adopting more flexible, sector-specific approaches.
As 2025 approaches, expect a debate over the benefits of comprehensive regulations versus industry-specific standards, with both approaches likely to coexist in many regions.
While states debate the right way to regulate AI, they will also need to cooperate to establish globally aligned standards.
In 2025, governments will increasingly recognize the need for cross-border regulatory alignment to ensure consistency and reduce compliance complexity for businesses operating in multiple jurisdictions.
Great AI governance begins with solid data management, and RecordPoint is perfectly positioned to help organizations build a foundation for responsible, secure, and compliant AI use. Here's how:
Many organizations, recognizing the transformative nature of GenAI, are creating a dedicated committee responsible for overseeing AI governance.
These committees typically include representatives from relevant departments, such as IT, legal, compliance, and ethics. The goal of these committees is to ensure AI initiatives align with organizational values and regulatory requirements.
One of the first items on the agenda is often good data management. After all, how can you safely and securely deploy AI applications without a clear understanding of your data? Effective data management is the crucial first step in responsibly adopting AI.
Business owners looking to adopt AI must first get their data estate under management, and build a clear picture of what they hold, before leveraging it for AI applications. The sketch below shows a minimal version of that first pass.
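The snippet below models a bare-bones data inventory and a rule that clears only PII-free sources for AI use without further review. The DataSource fields, the example estate, and the clearing rule are all invented for illustration; in practice the inventory comes from a data catalog and the policies are far richer.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    owner: str
    contains_pii: bool

# A hypothetical estate; in practice this inventory comes from a data catalog.
estate = [
    DataSource("crm_contacts", "sales", contains_pii=True),
    DataSource("support_tickets", "support", contains_pii=True),
    DataSource("public_docs", "marketing", contains_pii=False),
]

def cleared_for_ai(sources: list[DataSource]) -> list[DataSource]:
    """Clear only PII-free sources for AI use without further review."""
    return [source for source in sources if not source.contains_pii]

for source in cleared_for_ai(estate):
    print(f"{source.name} (owner: {source.owner}) cleared for AI use")
# -> public_docs (owner: marketing) cleared for AI use
```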
As we’ve seen, governments around the world have woken up to the risk inherent in GenAI development and usage, and have responded with a range of frameworks and laws tailored to their environments. We have seen a split between sector-based approaches and broad, cross-sectoral laws. But whatever the approach, every law requires businesses to manage their data better, and to do so proactively, from the very beginning.
If you are not sure where you stand, it’s time to find out.
Learn more about how investing in XAI and compliance is the essential next step in your AI journey.