Mitigating AI risk in your organization

Organizations must prioritize robust AI governance policies, procedures, and platforms to mitigate AI risks and enable the safe and secure use of AI tools. Confused about how to start? Read our guide.

Written by Adam Roberts

Published: March 5, 2025


With satellites orbiting Earth capturing a wealth of climate and geospatial data every day, one thing NASA is not short of is data. The organization possesses more than 100 petabytes of geospatial data – and counting – covering everything from atmospheric conditions to land cover changes, ocean temperatures, and more.

But collecting all this data is one thing; making use of it is quite another. Indeed, until recently, unless you were a specialist researcher or scientist, the data was almost impossible to navigate. This (highly technical) data represented an extremely large haystack.

The team at NASA recognized the vast potential of their earth science data, but knew it was underutilized due to its scale and the complexity of accessing and working with it. Enter: generative AI (GenAI).

Together with Microsoft, NASA developed an AI assistant called "Earth Copilot" to simplify and democratize this process. Now, researchers, policymakers, and the general public can more easily explore and extract insights from NASA's treasure trove of earth observation data, by asking questions in plain language, for example, “How did the COVID-19 pandemic affect air quality in the US?” The assistant will then return datasets to answer the query.

Earth Copilot is a particularly vivid example of how AI tools are rapidly transforming the way we work across industries. From broad GenAI models like ChatGPT, which can be seen as a more robust search engine, to agentic AI experiences that complete specialized tasks, the impact of AI is becoming undeniable.

This is often discussed at an individual level, with employees who resist incorporating AI into their workflows reminded (usually on LinkedIn) that, “AI won’t take your job, but a human using AI will.” The principle also applies at a company level.

AI’s transformative potential

While we’ve seen how NASA embraced GenAI, not every organization has NASA’s vast technical skills and experience. But it turns out there is no shortage of inspiring case studies of AI applied in all types of organizations, often with significant impact. Here are examples from two quite different businesses:

Ally Financial's AI-powered customer service

Digital financial services company Ally Financial leveraged Microsoft Azure and Azure OpenAI Service to assist its customer service representatives. The model helps document customer calls, recording call content at 85% accuracy and reducing post-call effort for reps by 30%, with a target of 50%. This frees up time for reps to focus on providing more personalized and empathetic support, leading to higher customer satisfaction.

Petbarn's "PetAI" for personalized recommendations

Australian pet supplies retailer Petbarn launched an AI-powered assistant called "PetAI" to give pet owners highly tailored advice and product recommendations. The model draws on a deep understanding of pet care, nutrition, and customer preferences to provide a personalized experience. Pet owners can quiz the assistant to get answers to questions like whether it is safe for dogs to eat Vegemite toast (“Generally not recommended ... as it contains high levels of salt”) – or the best way to introduce puppies to other dogs (“gradually and in a controlled manner.”) This service has resulted in increased customer engagement for Petbarn.

Not every company is AI-ready

As AI capabilities continue to advance, we can expect to see even more transformative applications emerge in the years to come. But while AI holds transformational potential for any organization, not every organization is ready. A 2024 Gartner survey showed that while 60% of companies had begun pilot projects to deploy Microsoft 365 Copilot, only 6% had finished their pilots and were actively planning large-scale deployments. Only 1% had completed a Copilot deployment to all eligible office workers in their organization. A subsequent Gartner study found the primary reason for failed GenAI deployments is inadequate data readiness.

The risks of uncontrolled AI usage

With all of the AI hype targeted at individuals and organizations, it’s no wonder people are urgently looking to apply AI in their role or organization. Unfortunately, employees often dive in with little guidance.  

For all the progress AI can deliver, most organizations have little understanding of how to manage it. Without clarity on how to approach AI securely, many companies are either banning the technology entirely or offering no guidance at all. Silence in this case may be interpreted by employees as tacit endorsement of GenAI usage. And in the event of a failed pilot program, employees may feel even more justified in seeking out GenAI tools themselves.

The biggest risk for companies is thus uncontrolled AI usage, where employees use AI tools through personal accounts instead of controlled, enterprise instances. GenAI tools like ChatGPT are widely available, with free plans making the barrier to entry virtually non-existent.  

A significant amount of ChatGPT usage is happening on personal accounts, not enterprise-managed ones. According to a study from Software AG, half of all employees are using “Shadow AI” (unsanctioned AI tools), citing productivity gains, a desire for independence, and access to tools their employers do not offer.

Another survey, from Cyberhaven, showed that 73.8% of workplace ChatGPT usage occurred through public, non-corporate accounts, with the numbers even higher for Gemini (94.4%) and Bard (95.9%).

Uncontrolled AI usage can lead to serious consequences.

Data leakage and exposure of sensitive information

Employees using personal accounts to access AI tools can lead to the exposure of sensitive company data and intellectual property, including PII/PCI and other types of sensitive customer data. All the care you’ve put into securing internal networks and selecting secure tooling can be undone in a minute when an employee copies and pastes sensitive customer data into a GenAI tool.  

Remember, data entered into a consumer version of GenAI platforms like ChatGPT can be used to train their models.

Compliance violations

While AI-specific laws are still being introduced around the world, major privacy regulations like the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and Australia's Privacy Act 1988 already govern how personal data can be used, including in AI tools. Penalties for violations can reach up to 4% of a company's total global turnover under the GDPR.

The EU’s AI Act was agreed in December 2023, and its measures are now beginning to take effect. Even if the government in your jurisdiction has yet to address AI regulation specifically, you need to be proactive.

Lack of control and visibility

Beyond the potential for data exposure, users could be breaking other organizational rules by using AI, such as shortcutting essential parts of their job that require human oversight, or making decisions based on stale, out-of-date data. If you adopt a laissez-faire approach to AI governance, such missteps may result in a poor customer experience or lower-quality outcomes.

Does all this risk sound a bit overblown? Unfortunately, there are plenty of examples of what can go wrong when AI governance is not established and GenAI usage is surreptitious and poorly managed. One particularly egregious example occurred at an Australian state government department in 2024. A worker at Victoria’s child protection agency, part of the Department of Families, Fairness and Housing, was found to have entered substantial amounts of personal information, including the name of an at-risk child, into ChatGPT. A subsequent investigation found the worker may have used ChatGPT in more than 100 cases when drafting child protection-related documents. Across the department, from July to December 2023, nearly 900 employees had accessed the ChatGPT website – almost 13% of the workforce.

The case led to a ban on staff using generative AI services at the department.

Establishing AI governance

To mitigate these risks, organizations need to embrace AI governance. AI governance involves strong data governance, AI training, and establishing AI policies and procedures that define:

  • Acceptable AI usage
  • Data management and security
  • Bias and fairness
  • Transparency and explainability
  • Human oversight
  • Third-party AI usage – vendor-provided AI tools

An AI governance committee can help enable the safe and secure use of AI tools within the organization. This committee should include stakeholders from across the business, such as legal, ethics and compliance, privacy, information security, research & development, and product engineering and management.

Influencing leadership to embrace AI governance

Convincing leadership to invest in AI governance can be a challenge, but it's a necessary step to protect the organization. Any discussion of return on investment should be paired with a reminder of the risks. For example, surveys show Microsoft Copilot could save the average white-collar worker 10 hours per month. For an average 1,200-person enterprise, this represents savings of $5-7M per year. This is a significant saving, but one that would need to be accompanied by investment in AI governance to ensure the potential return on investment is not undermined by a costly data breach or compliance issue.
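To see how those headline figures hang together, here is a quick back-of-the-envelope calculation. The loaded hourly labor cost is our assumption (roughly $35-50 per hour), not a figure from the surveys:

    # Back-of-the-envelope estimate of Copilot time savings.
    # Assumption (illustrative, not from the surveys): a loaded labor
    # cost of roughly $35-50 per hour for a white-collar worker.
    hours_saved_per_worker_per_month = 10
    workers = 1200

    hours_saved_per_year = hours_saved_per_worker_per_month * workers * 12
    # = 144,000 hours per year

    for hourly_cost in (35, 50):
        print(f"At ${hourly_cost}/hour: ${hours_saved_per_year * hourly_cost:,} per year")

    # At $35/hour: $5,040,000 per year
    # At $50/hour: $7,200,000 per year

Those two endpoints bracket the $5-7M figure, which is why the assumed hourly cost is worth stating explicitly when you present the case to leadership.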

Fear tactics shouldn't be off the table – the risk is real. Gain buy-in by getting people from across the business invested in the AI governance committee. Emphasize that a robust AI governance framework is essential for enabling the safe and secure use of AI tools, which can drive innovation and efficiency within the organization.

Make sure they understand: doing nothing is the most dangerous approach.

How RecordPoint can help

RecordPoint offers solutions to help organizations prepare and protect their data to speed up AI rollout and reduce the risk of uncontrolled AI usage:

Reduce sensitive data exposure

Protect sensitive information and intellectual property with pre-vetted or user-created datasets for AI model training. Detect and filter out PII, sensitive, and regulated data and apply least-privilege principles to ensure critical data is secure.
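To make the detect-and-filter principle concrete, here is a minimal sketch – a generic illustration, not RecordPoint's implementation – of redacting a few common PII patterns before text is ever sent to a GenAI tool:

    import re

    # Illustrative only: production PII detection relies on trained
    # classifiers and policy engines, not a handful of regexes.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(text: str) -> str:
        """Replace detected PII with typed placeholders before prompting."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    prompt = "Summarize this ticket from jane@example.com re: card 4111 1111 1111 1111"
    print(redact(prompt))
    # Summarize this ticket from [EMAIL REDACTED] re: card [CARD REDACTED]

However sophisticated the detection, the design principle is the same: sensitive values are identified and stripped before they leave your environment, rather than relying on every employee to remember not to paste them.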

Stay ahead of compliance requirements

Proactively manage AI data to ensure compliance with the GDPR, CCPA, and emerging AI regulations. Track data sources, maintain audit trails, and create structured review processes to manage risk.

Secure integration and deployment

Integrate with all your preferred data sources and LLMs for seamless deployment. Easily connect to popular AI systems like ChatGPT, Copilot, or your own custom models, then validate using natural language queries on your data.

The time is now

The risks posed by uncontrolled AI usage are real and significant. Organizations must prioritize the implementation of robust AI governance policies, procedures, and platforms to mitigate these risks and enable the safe and secure use of AI tools.

By joining RecordPoint's AI Governance Early Access Program, you can take the first step towards protecting your organization's data and intellectual property while unlocking the full potential of AI. Don't wait – the time to act is now.
