AI regulation begins to bite, just in time
With AI laws coming into effect, new technology coming online, and a US focus on growth, the AI landscape is changing fast. Here's how to keep up.
Hi there,
Welcome to FILED Newsletter, your round-up of the latest news and views at the intersection of data privacy, data security, and governance.
This month:
- 96% of S&P 500 companies have had data breaches, according to new analysis
- Is Elon Musk’s Department of Government Efficiency an official agency? A court case on its access to data may hinge on answering that question
- The British government is dropping the fitness and weapons training for specialist cyber military recruits
But first: with the European Union’s AI Act beginning to come into effect and other laws on their way, what are companies doing about it?
If you only read one thing:
AI regulation begins to go into effect in the EU, while the US hits the accelerator
Only a month into 2025, and the world of AI – and AI regulation – has already seen a lot of change. Let’s review where things are at, and what businesses need to do.
February 2 saw the first measures of the European AI Act take effect, with a ban on certain uses of AI including social rating software, individual predictive policing via profiling, and real-time facial recognition in public spaces.
The next stage of enforcement comes in August, when makers of general-purpose models like OpenAI’s ChatGPT or Google’s Gemini will be required to provide transparency on their technical documentation and training data.
An emerging international divide
Meanwhile, world leaders converged on Paris last week for a two-day Artificial Intelligence Action Summit. The event featured buzzy announcements like a planned €150 billion investment in European AI, the creation of a new public interest foundation to focus on AI transparency, and the development of AI systems in defense.
Amidst all this international cooperation, there came a (perhaps unsurprising) refusal from the United Kingdom and the United States to sign an international agreement on an "open", "inclusive" and "ethical" approach to the technology's development. US Vice President JD Vance said too much regulation could kill the industry “just as it’s taking off,” while a statement from the UK government pointed to concerns about national security and “global governance.”
Indeed, the new US administration brought a new AI Executive Order: President Trump revoked President Biden’s earlier AI Executive Order and directed that a new AI Action Plan be developed within 180 days. These actions signal a deregulatory approach and a move away from AI governance principles. The order came days after President Trump announced “Project Stargate”, an initiative to invest US$500 billion in AI infrastructure.
What could drive such an investment?
Geopolitics, of course. Chinese firm DeepSeek caused a stir last month when its R1 “reasoning” model showed performance comparable to ChatGPT and Gemini models, but was apparently built for a fraction of the cost (though this is disputed). The announcement spooked the market, with US chipmakers like Nvidia suffering steep share price losses. Models like R1 suggest traditional LLMs are at risk of commoditization, and the market now appears to be shifting to “agentic” models that can perform tasks on users’ behalf.
So, in summary: we have signs of disruption to US tech dominance from a rising China, leading the US to focus on deregulating its AI industry, all while the EU attempts to paint itself as the safe, regulated AI alternative that is nonetheless poised for growth.
Also, Elon Musk wants to buy OpenAI.
Where does that leave businesses?
With the AI landscape growing more unpredictable, and more focused on “innovation” than on safety, businesses should concentrate on establishing control over their own domain.
The first step: understand your obligations under AI or privacy laws, which may depend on your location or industry. While the US federal government is moving away from regulating AI, that doesn’t mean the states have stopped.
Secondly: Establish some form of AI governance beyond a blanket rule against using AI platforms. This kind of knee-jerk policy doesn’t work, and frequently leads to “Shadow AI”, where employees surreptitiously use AI models, usually the free consumer versions; a recent study suggested half of employees do this. Such activity raises the risk that sensitive customer or confidential company data will be used to train these models, among other dangerous outcomes.
Thirdly: Don’t get distracted by fancy new technology, especially when you don’t understand the regulatory environment in which it was created. There are risks in using any AI platform for business purposes: just this month, OpenAI investigated claims of a data breach, for example. So it wasn’t surprising when DeepSeek’s model fell victim to jailbreaking and a data breach of its own. But with DeepSeek the risks are different, with critics pointing to the legal and compliance issues that come with a model based in China. You probably shouldn’t use R1 in your organization; authorities like the Australian government and most Australian states and territories have already banned the technology for their employees, South Korea banned downloads of the app entirely, Italy’s data protection authority ordered a limitation on DeepSeek’s processing of Italian users’ data, and US lawmakers are pushing for a ban of their own.
Finally: (Try to) keep up with changes in the industry. There has been a lot of movement in just the first month and a half of the year, and no doubt 2025 has many more surprises in store.
🕵️ Privacy & governance
A lawsuit challenging Elon Musk’s Department of Government Efficiency’s access to computer systems at a trio of federal agencies may hinge on whether the group is actually an official agency.
An overview of the Donald Trump administration's tech policy so far in its second term.
A Florida-based data broker apparently obtained sensitive data on US military members in Germany from a Lithuanian firm, revealing the global nature of online ad-tech surveillance.
🔐 Security
The Australian government released guidance on proactive cyber defense strategies for enterprises, such as adopting zero trust and secure-by-design principles to enhance cyber resilience.
The latest from RecordPoint
📖 Read:
With the growth in advanced AI, companies have been struggling to govern the technology and reduce risk. Learn the key elements of a strong AI governance function, and how to implement them in your organization.
And – slightly more technical – understand what a database schema is, and how it should factor into your data governance practices.
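To make the schema idea concrete, here is a minimal, entirely hypothetical sketch (the table and column names below are illustrative, not drawn from the linked article) of how a schema can encode governance concerns like retention periods and audit trails directly in the data layer, using SQLite from Python:

```python
import sqlite3

# Illustrative only: customer_records and access_log are hypothetical tables,
# showing how governance concerns can be baked into a schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer_records (
    id              INTEGER PRIMARY KEY,
    full_name       TEXT NOT NULL,
    email           TEXT UNIQUE,   -- personal data: subject to privacy law
    created_at      TEXT NOT NULL, -- ISO-8601 timestamp
    retention_until TEXT           -- drives disposal under your retention policy
);

CREATE TABLE access_log (
    id          INTEGER PRIMARY KEY,
    record_id   INTEGER NOT NULL REFERENCES customer_records(id),
    accessed_by TEXT NOT NULL,
    accessed_at TEXT NOT NULL      -- audit trail for governance reviews
);
""")

# With the schema in place, a governance question becomes a simple query,
# e.g. which records are past their retention date and due for disposal:
overdue = conn.execute(
    "SELECT id FROM customer_records WHERE retention_until < date('now')"
).fetchall()
print(overdue)
```

The point is less the SQL than the principle: when retention and access rules live in the schema itself, governance questions become queries rather than guesswork.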