How will the US election impact AI and privacy regulation?
With US president-elect Donald Trump signaling a lighter regulatory approach, should businesses still invest in privacy and AI governance?
Hi there,
Welcome to FILED Newsletter, your round-up of the latest news and views at the intersection of data privacy, data security, and governance.
This month:
- Australia’s financial regulator is warning firms they need to prioritize AI governance
- Is “shifting privacy left” still a valid concept?
- A cyberattack on oil giant Halliburton cost $35 million by end of September
But first:
If you only read one thing:
What will Trump’s victory mean for AI and privacy policy, and should you adjust your strategy?
It’s just a week since the United States election, and everyone is still trying to work out what it means. It's a great time to be a pundit, with plenty of uncertainty in the air and high stakes. But let’s leave the social and political analysis to the experts and look at how the 47th president plans to approach regulation, particularly on AI and privacy.
In both cases we should see a lighter regulatory approach.
On privacy, while Democratic control of the Federal Trade Commission and the Federal Communications Commission would have meant more aggressive privacy regulation, a Trump administration will likely adopt a more hands-off stance. Trump is expected to appoint three Republican-leaning commissioners to the FCC, meaning broadband privacy rules may not be adopted.
At the FTC, according to Pivotal Research Senior Analyst Brian Wieser, "Privacy issues may become non-issues ... Owners of data will benefit from this election."
Outside the executive branch, the American Privacy Rights Act (APRA), the landmark bipartisan legislation that has stalled recently, may not have much of a future. While control of the House is still undecided, APRA is unlikely to get far with a Republican Senate.
With AI, it’s a similar story: a softer stance on regulation is likely.
Trump has said he intends to dismantle Biden’s AI policy framework – in particular the AI Executive Order, a landmark policy just over one year old – as soon as he takes office. Republicans have criticized two aspects of the EO as onerous, arguing they could force companies to disclose trade secrets:
- One provision required companies developing powerful AI models to report to the government how they’re training and securing them, and to submit them to tests designed to probe for vulnerabilities.
- Another directed the Commerce Department’s National Institute of Standards and Technology (NIST) to offer guidance that helps companies identify and correct flaws in models, including biases.
We don’t know what the Trump administration will replace them with; the president-elect’s stated desire to “support AI development rooted in free speech and human flourishing” is all we have to go on.
Senator Chuck Schumer’s bipartisan plan for AI legislation is also likely dead given Republican control of the Senate, where senators like Ted Cruz have advocated for a lighter-touch approach to regulation.
So, what should you do?
With the potential end of these requirements and an overall more “hands-off” approach to regulation, should US companies – or those wishing to do business in the US – relax their AI governance and privacy efforts and call it a day? In my view, that would be unwise and short-sighted.
You should enact strong governance for privacy and AI for two reasons. First, you want to be able to adapt to a shifting regulatory environment: things may change in your state, and you want to be ready when they do.
Second, it’s the right thing to do for your customers. The fact that you won’t get audited doesn’t justify a laissez-faire approach to governance. By managing your data correctly, you will improve its security and build trust with your customers.
Even without federal action on AI and privacy, companies will still need to navigate a patchwork of state-based laws.
Nineteen US states have enacted their own privacy laws, and states like California, Colorado, and Utah are already enacting AI-related laws. Others are likely to follow, with each one potentially increasing your compliance costs and complicating deployment strategies. Overseas, we of course have the General Data Protection Regulation and the EU’s AI Act setting the standard for other nations to follow.
So, specifically, what should you do?
The reason I’ve grouped privacy and AI together is that they are interconnected: good governance in one enables the other. NIST’s AI Risk Management Framework includes “privacy-enhanced” as a key characteristic of what it calls “trustworthy AI”. Focus on understanding your data so you can manage it appropriately, reduce privacy risk, and ensure transparency and accountability. Then implement security measures to protect sensitive information and address privacy concerns: detect and filter out PII, and apply least-privilege principles so critical data stays secure (see the sketch below).
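To make one of those steps concrete, here is a minimal sketch in Python of detecting and redacting common PII patterns before data flows into a downstream AI pipeline. The patterns and names (`PII_PATTERNS`, `redact_pii`) are illustrative assumptions for this newsletter, not a production detector; a real deployment would use a vetted PII-detection library or service.

```python
import re

# Illustrative regexes for two common PII types (assumed patterns, not exhaustive).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    record = "Contact Jane at jane.doe@example.com, SSN 123-45-6789."
    print(redact_pii(record))
    # -> Contact Jane at [REDACTED EMAIL], SSN [REDACTED SSN].
```

The point of a filter like this is that it runs before data reaches an AI model or index, so least-privilege access controls downstream only ever see the redacted version.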
Borrow from Australia’s Federal government, whose Office of the Australian Information Commissioner has provided advice on privacy when developing and training generative AI models, and on the use of commercially available AI products.
And borrow from us: we recently produced an article on navigating AI privacy and regulation; there’s a lot there for those unsure where to start.
With every change in government comes uncertainty; it's best not to take your eye off the ball.
🕵️ Privacy & governance
The Australian federal government offered a vivid example of the dangers of GenAI: a trial of Microsoft Copilot highlighted how poor information management practices and permissions resulted in inappropriate access to, and sharing of, sensitive information.
Australia’s financial regulator also warned firms they needed to prioritize AI governance.
Meanwhile, 15 data protection agencies from countries including Australia, Canada, and the United Kingdom issued a joint statement to major social media platforms regarding data scraping and privacy protection. Among other things, the statement says organizations must comply with privacy and data protection laws when using personal information, including from their own platforms, to develop AI large language models (LLMs).
Is the concept of Shifting Privacy Left still valid?
🔐 Security
Amazon confirmed employee data was compromised due to a “security event” at a third-party vendor.
A cyberattack on oil giant Halliburton cost $35 million by the end of September, according to the company’s latest financial report.
CISA is doubling down on “secure by design” and shifting the security onus to software makers.
Related: how these major software firms are implementing secure by design.
Threat actors are sending fake emergency data requests with the goal of harvesting personally identifiable information (PII), according to an FBI warning.
The latest from RecordPoint
📖 Read:
Explore the benefits and challenges of managing health data at scale, along with best practices for improving your data handling processes.
Compliance with the Gramm-Leach-Bliley Act is a legal obligation for US financial institutions. Learn what GLBA compliance involves, and how to ensure your organization meets the requirements.
Is it time you modernized the way your organization manages data? Use our roadmap for guidance as you start your journey to moving on from old, outdated systems.
Your HIPAA compliance checklist: essential steps for complying with the regulation and protecting patient data.
🎧 Listen:
Lucid Privacy Group founder Colin O’Malley joins my FILED co-host Kris Brown to discuss how he helps organizations right-size their privacy function to respond to the unrelenting pace of change. I was very sorry to miss this one; it was a great conversation even in my absence.