You don’t need AI laws to establish strong AI governance

AI risk hasn’t gone away just because the law has.

Anthony Woodward

Founder/CEO

March 18, 2025

Hi there,  

Welcome to FILED Newsletter, your round-up of the latest news and views at the intersection of data privacy, data security, and governance.  

This month:

  • Alleged LockBit ransomware developer extradited from Israel, while the group targets vulnerabilities in Fortinet products
  • US politicians, civil rights campaigners and the BBC want a public hearing on the data privacy fight between Apple and the UK government
  • And will open source GenAI models be just what the doctor ordered for patient privacy?

But first, should companies relax their AI governance efforts because of an apparent pullback in AI regulation? (Spoiler: no.)

What is regulation for, anyway?

Unless you live under a rock, you will have noticed that the AI regulation landscape has shifted lately. Quickly: soon after assuming the US Presidency, Donald Trump revoked the Biden Executive Order on AI and issued a separate Executive Order requiring that a new AI Action Plan be developed within 180 days.

Separate moves – such as the appointment of David Sacks as the White House crypto and AI czar, and the announcement of Project Stargate, a $500 billion investment in AI infrastructure – reinforced the assumption that this new approach would be growth-focused and deregulatory.

Indeed, OpenAI’s submission on the AI Action Plan asked the federal government to shield AI companies from proposed state AI regulations if they voluntarily share their models with the government, and to give AI companies access to government data, including healthcare information. Oh, and copyright reform, so they have access to more data under fair use. Google, too, called for weakened copyright rules for data used in training AI.

Any change in approach to AI regulation would affect not only how AI platforms are built – what data are they trained on? How are things like privacy, security, ethics and bias handled? – but also how businesses use those platforms. With fewer regulatory barriers in place, businesses could plunge headlong into AI in pursuit of growth, without worrying so much about the ethical or legal implications. They could feed their confidential or sensitive customer data to AI models, ignore sources of bias, and replace swathes of employees with AI agents.

Some companies may see the pull away from regulation as an opportunity, yes. But those businesses would be conflating compliance with risk. The risk hasn’t gone away just because the law has. It’s just up to your company to decide how to manage that risk. And companies that behave legally, but not ethically, can be punished in other ways.

Send in the goons

Our guest on next week’s episode of FILED, Paul Sonntag, has an approach he calls the “Goon Test”. To wit: “if you wouldn’t or couldn’t hire goons to do something, you probably shouldn’t use technology to do that thing.”

For example, to find efficiencies in your business by cutting costs or reducing staff, you may decide to collect more data about how your employees spend their day. But would you hire two goons to collect that data by following employees around – stopwatches in hand – to see how long they take to complete tasks, or how long they spend in the bathroom? Would that be the right thing to do for the customer, or for your employees? Would you be comfortable explaining that approach to your company board? If the answer to any of those questions is no, then you probably shouldn’t use goons (or a technology like AI that would achieve the same ends). And you shouldn’t need a regulator to tell you that.

Businesses shouldn’t wait for new AI-specific regulations to establish their own governance practices. Practices like closely monitoring employee productivity through AI may be legal, but they are unethical and bad for business. Regulation always lags behind – it’s up to companies to proactively govern their use of AI based on ethical principles, not just legal compliance.

Surviving in the AI Wild West

It is worth putting in the effort to establish your own guardrails and controls. If this is now the AI Wild West, you don’t want to be the company that realizes in 10 years that it didn’t prepare for AI properly, and either missed the wave or implemented it poorly. You don’t want to be Air Canada, whose AI chatbot offered fictitious discounts to grieving customers.
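What might a guardrail actually look like? Here is a minimal, hypothetical sketch in Python: a pre-flight check that screens outbound prompts for obviously sensitive data before they ever reach an external AI model. The patterns, function names (`check_prompt`, `send_to_model`, `call_model`), and blocking policy are illustrative assumptions, not a production-grade classifier or any particular vendor’s API.

```python
import re

# Illustrative patterns only – a real deployment would use a proper
# data classification service, not three regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "tax_file_number": re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def call_model(prompt: str) -> str:
    # Placeholder for a real model call (internal or vendor API).
    return f"(model response to: {prompt[:40]})"

def send_to_model(prompt: str) -> str:
    """Block – or escalate for review – rather than silently send."""
    findings = check_prompt(prompt)
    if findings:
        raise ValueError(
            f"Prompt blocked by AI guardrail: contains {', '.join(findings)}")
    return call_model(prompt)
```

The point isn’t the regexes. The point is that the control exists, is owned by the business, and doesn’t wait for a law to mandate it.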

The real challenge is for businesses to think holistically about how they’ll implement AI – what data will it process, what will the impacts be, how will it change the business as a whole?

A lot of businesses have struggled to implement new tools like Microsoft Copilot, finding themselves stuck at the pilot phase. According to one study, only 6% of businesses reported moving to a large-scale deployment of Copilot, and the number one reason cited was poor governance of unstructured data.

These tools represent a fundamental shift in how a business operates, not just a new tool to offer your employees. When you bring in transformative AI technology, you have to rethink your entire business model and governance approach. It's not enough to just comply with the law – you have to make sure you're governing your data, and using the technology responsibly and ethically, in a way that benefits your customers and employees.  

The key is for businesses to get ahead of the curve on ethical AI governance, not just wait for new regulations. By proactively aligning their use of AI with principles of customer focus and employee wellbeing, they can unlock the benefits of the technology while mitigating the risks. This kind of responsible, forward-looking approach will be critical for success in the years to come.  

🕵️ Privacy & governance

US politicians, civil rights campaigners and the BBC are all calling for a UK High Court hearing on the data privacy row between Apple and the UK government to be held in public.

The Office of the Australian Information Commissioner (OAIC) launched a new digital ID regulatory strategy to encourage people and businesses to shift to safer and more protective means of ID verification and ensure privacy is respected in Australia’s Digital ID System.

Interesting take: open source AI models that perform at the same level as OpenAI’s GPT-4 are a potential boon for physicians seeking to protect patient privacy.

🔐 Security

🔓 Breaches

An espionage group operating out of China is targeting routers made by Juniper Networks, according to incident responders from Mandiant.

Denmark’s cybersecurity agency published a threat assessment on Thursday warning of an increase in state-sponsored cyber espionage activities targeting the telecommunications sector in Europe.

The Medusa ransomware gang presents a major threat to the critical infrastructure sector, according to a newly released joint advisory from the FBI, Cybersecurity and Infrastructure Security Agency (CISA) and the Multi-State Information Sharing and Analysis Center (MS-ISAC).

🧑‍⚖️ Legal cases & breach fallout

Australian financial services firm FIIG Securities faces legal action from the Australian Securities and Investments Commission (ASIC) following a cybersecurity breach that exposed sensitive information of 18,000 clients, with the regulator alleging the firm failed to maintain adequate cybersecurity measures for over four years.

The conviction of former Uber chief security officer Joe Sullivan on obstruction of justice charges was upheld by the U.S. Court of Appeals for the Ninth Circuit in California this week. A federal jury convicted Sullivan of two charges related to his attempted coverup of a 2016 security incident at Uber, where hackers stole the personal details of 57 million customers and the personal information of 600,000 Uber drivers.

An alleged developer behind the LockBit ransomware was extradited from Israel on Thursday, appearing in front of a New Jersey court. He is facing 40 charges related to computer damage and extortion.

Speaking of LockBit, two vulnerabilities impacting Fortinet products are being exploited by a new ransomware operation with ties to the group.  

The latest from RecordPoint  

📖 Read:

Organizations must prioritize robust AI governance policies, procedures, and platforms to mitigate AI risks and enable the safe and secure use of AI tools. Confused about how to start? Read our guide to mitigating AI risk.  

Data democratization means making data more accessible for everyone within a business, regardless of their technical expertise. Learn about how to enable it in your organization and how it can improve your data governance practices.  

Reliable data is valid, complete, unique, and reproducible. It’s also free from any errors and inconsistencies. Learn more about data reliability, and why it’s important.

With more than 25 years of experience between them, RecordPointers Shelly Wang, Mick Sowl, and Lauren Hubner have contributed to dozens of digital transformation projects. Here’s what they’ve learned.  

Heading to IAPP GPS 2025 in DC? We are! Check out a sneak preview of what we’ll be sharing.

🎧 Listen:

We’re back! In the first episode of Season 3 of the FILED Podcast, Kris and Anthony discuss the biggest issues they missed over the FILED hiatus, from AI law developments to DOGE’s access to government data.


Get hooked on FILED

This is a fast-paced, complex industry, and it can get overwhelming. FILED is here to help you navigate it.