How and why to embrace trustworthy AI

Any organization investing in GenAI needs to balance the technology's productivity gains against its risks. To really reap the benefits, the technology must be trustworthy. Learn the elements of trustworthy AI and how to apply them in your organization.

Written by Adam Roberts

Published: November 14, 2024

Consider this scenario: an innovative, automated new technology is released to the world, promising safety and productivity benefits. The public is nonetheless wary: they don’t know how this new “automated” solution works, so they find it hard to trust it. The status quo thus persists for far longer – decades, even.

I’m not talking about an airplane's autopilot, driverless cars, or drones. I’m talking about elevators.

Elevators have been around since the 1850s, though the first models had to be operated by hand, by an elevator operator. It wasn’t until 1925 that the world’s first fully automatic elevator was introduced. But the advent of this technology did not spell the end of the elevator operator. People didn’t trust the automated technology, and so the operator remained, lending a feeling of security and reliability.

It wasn’t until the 1945 elevator operator strike that the technology finally became widely used. The strike was enormously disruptive: 15,000 workers joined the picket line, leaving mail undelivered, freezing railways, and causing federal tax collections to fall by eight million dollars a day. That impact led building owners to push for change in the way elevators worked. After the strike, manufacturers put effort into improving the trustworthiness of the technology, introducing the emergency phone and the emergency stop. Automated elevators became the norm – but still decades later than the technology allowed.

The situation has a lot to teach us about the introduction of novel technology like generative AI (GenAI).

The AI trust deficit

After the last few years of AI development, the average user now knows two main things about a GenAI platform like OpenAI’s ChatGPT or Google’s Gemini.

  • These models are capable of generating plausible, legible, compelling output at incredible speed.  
  • This incredible output might also be incredibly wrong, either obviously implausible or subtly incorrect.

By now, we’ve all seen reports of AI offering advice on the correct number of rocks to eat every day, or recommending you put glue on your pizza to keep the toppings from sliding off. Indeed, so-called hallucinations are a major drawback of using GenAI platforms. The technology is not built to provide truthful output, just plausible output based on statistical patterns in enormous quantities of data. This means the models do a bad job of identifying irony or misinformation, and so will happily pass on bad data. Sometimes we can pinpoint the exact cause of a given hallucination – the rock-eating advice came from an article by the satirical publication The Onion – but just as often even the makers of the technology have no idea.

Hallucinations are a useful way to illustrate a significant problem with advanced GenAI models: they have become “black boxes”. It can be hard or even impossible to definitively explain why any particular decision has been made. As a result, while we’re happy to play around with the technology in a consumer setting, and indeed individual employees may (often surreptitiously) use it to streamline non-critical parts of their role, embracing GenAI for high-risk settings is more fraught. GenAI suffers from a trust deficit.

Why trustworthy AI matters

AI is a major investment for any organization, and it can be a significant competitive advantage and cost-efficiency play. But any organization considering GenAI needs to balance its productivity gains against the risks. And to really reap the benefits of the technology, it must be trustworthy.

As we saw with elevators, it doesn’t matter if the technology works perfectly – automatic elevators worked reliably soon after their release. What matters is whether we can trust it.

Companies need to use tools to make AI more explainable, fair, robust, private, and transparent.

What is trustworthy AI?

Trustworthy AI emphasizes safety, transparency, and accountability in AI development, ensuring that interactions with the technology – whether from a stakeholder or a customer – are secure and reliable.

Developers of trustworthy AI recognize that no model is flawless. They actively communicate how the technology is designed, its intended applications, and its limitations, fostering understanding and trust among customers and the public.

Beyond adhering to privacy and consumer protection regulations, trustworthy AI undergoes rigorous testing for safety, security, and bias mitigation. These models are transparent, offering key insights — such as accuracy benchmarks and details of the training datasets — to diverse stakeholders, including regulators, internal teams, and consumers.

How do I build — and use — trustworthy AI?

As AI evolves, so will the frameworks we use to define trustworthy AI, though two prominent examples, from NIST and Deloitte, already show a convergence of opinion. NIST defines trustworthy AI as including:

  • Validity and Reliability
  • Safety
  • Security and Resiliency
  • Accountability and Transparency
  • Explainability and Interpretability  
  • Privacy
  • Fairness with Mitigation of Harmful Bias

Deloitte’s Trustworthy AI framework reshuffles those elements slightly, defining trustworthy AI as:

  • Transparent and explainable  
  • Fair and impartial  
  • Robust and reliable
  • Respectful of privacy  
  • Safe and secure
  • Responsible and accountable

In both frameworks, you can see a blend of data security, compliance, and fairness. Trustworthy AI is not a one-dimensional concept, and achieving it will involve governance across every stage of the AI workflow: from development and integration, through data management, all the way to deployment.

How RecordPoint views trustworthy AI

Building on these frameworks, we’ve put together our own perspective on the key elements of trustworthy AI, and how to address them.

Privacy: Respecting sensitive information in line with regulations

AI systems are often described as "data-hungry," with larger datasets generally leading to more accurate predictions and more capable, fluent models. However, while accuracy is important for all the reasons cited above, it must not be the only consideration when developing these models. Responsible AI development should consider not only what data is legally available but also what is ethically and socially responsible to use. Trustworthy AI involves safeguarding sensitive information and complying with regulations.

Handling personally identifiable information (PII), payment card information (PCI), and other personal information requires strict safeguards to maintain user trust and meet regulatory standards.
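As a concrete illustration, here is a minimal, hypothetical Python sketch of one common safeguard: redacting obvious PII and PCI patterns, such as email addresses and card numbers, before text leaves the organization and reaches an external GenAI service. The patterns and names are illustrative only; production systems typically rely on dedicated classification and data loss prevention tooling rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; a real deployment would use dedicated
# PII/PCI detection tooling with far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace PII-like substrings with labelled placeholders before the
    text is sent to an external GenAI provider."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize this note: contact Jane at jane.doe@example.com, card 4111 1111 1111 1111."
print(redact(prompt))
# The redacted prompt, not the original, is what leaves the organization.
```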

Demonstrating compliance with emerging regulations, such as the EU AI Act and US state-specific AI laws, is critical to ensuring that AI models respect privacy rights and operate within legal frameworks. California, Utah, and Colorado are paving the way for AI regulation at the state level, but more such laws are on the way, and companies need to be prepared.

Safety and security: Minimizing harm, securing the data

GenAI hallucinations aren’t the only reason for a lack of trust – a perceived lack of security is also an issue. According to recent research, a large majority of those surveyed – 83% of Australians, 72% in the US, and 64% in the UK – see AI as a security risk when it comes to their data, while the same proportion of Australians (83%) want to see more transparency in how AI interacts with their data, as do 81% in the US and 70% in the UK.

Once deployed, AI systems have real-world consequences, making it crucial to ensure they perform as intended to protect user safety and well-being.  

The widespread availability of public AI algorithms offers vast potential for beneficial applications. However, this openness also introduces the risk of the technology being repurposed for unintended or harmful uses, underscoring the need for careful oversight and ethical considerations.

When using a public GenAI model with organizational data, there are three primary sources of risk, each of which must be considered before deploying a given AI model:

  • The model’s training data: as discussed, this data – hundreds of gigabytes scraped from the internet – may contain PII.
  • Corporate data sets: your enterprise data may also contain customer or employee PII.
  • GenAI output: the content generated by a GenAI model may contain personal information, or sensitive data obtained by inference.

Despite the risk, research shows only 24% of GenAI initiatives are being secured, which threatens to expose the data and models to breaches.  
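One way to reason about the third risk source above is to screen GenAI output against values your organization already knows to be sensitive before that output is stored or shown to users. The Python sketch below is hypothetical and deliberately simple: the set of sensitive values and the blocking behavior are illustrative assumptions, not a description of any particular product's controls.

```python
# Hypothetical post-generation check: flag GenAI output that echoes values
# the organization has already classified as sensitive.
SENSITIVE_VALUES = {
    "jane.doe@example.com",      # illustrative entries from a corporate dataset
    "ACME-CONTRACT-2024-17",
}

def screen_output(generated_text: str) -> tuple[bool, list[str]]:
    """Return (is_safe, matches). A real pipeline would log the event and
    route flagged output for review rather than simply returning it."""
    matches = [v for v in SENSITIVE_VALUES if v.lower() in generated_text.lower()]
    return (len(matches) == 0, matches)

ok, hits = screen_output("Per our records, contact jane.doe@example.com about renewal.")
if not ok:
    print(f"Blocked GenAI response: contains sensitive values {hits}")
```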

Transparency: Making AI explainable

To truly build a trustworthy AI model, the algorithm cannot function as a "black box." Understanding how a given model operates is essential for trusting its results.

Transparency in AI refers to a set of best practices, tools, and design principles that allow users and stakeholders to understand how an AI model was trained and how it functions. Explainable AI (XAI) is a subset of transparency, providing tools that clarify how an AI makes specific predictions and decisions.

While transparency and XAI are critical for building trust in AI systems, there is no one-size-fits-all solution. The right approach requires identifying who the AI impacts, assessing the associated risks, and implementing mechanisms to effectively communicate how the system works.

Retrieval-augmented generation (RAG) enhances AI transparency by linking generative AI models to authoritative external databases, enabling the models to cite sources and deliver more accurate, trustworthy responses.
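To make the idea concrete, here is a deliberately simplified RAG sketch in Python. It uses naive keyword-overlap scoring as a stand-in for a real vector index, and it stops at building the prompt rather than calling a model; the document names and wording are invented for illustration. In practice you would use an embedding store and your chosen model API.

```python
# Toy retrieval-augmented generation: retrieve relevant passages, then build
# a prompt that asks the model to answer only from the cited sources.
DOCUMENTS = {
    "retention-policy.txt": "Customer records are retained for seven years after contract end.",
    "privacy-faq.txt": "Personal data can be deleted on request under applicable privacy law.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by keyword overlap (a stand-in for vector search)."""
    terms = set(query.lower().split())
    scored = sorted(
        DOCUMENTS.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(query))
    return (
        "Answer using only the sources below and cite them by name.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("How long are customer records retained?"))
# The prompt is then sent to the GenAI model of your choice, and its answer
# can be checked against the named sources.
```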

Non-discrimination: Minimizing bias

AI models are trained by humans, often using datasets that are limited in size, scope, and diversity, which can introduce biases. For example, Amazon abandoned a hiring algorithm after finding it favored applicants whose resumes contained words like “executed” or “captured”, which appeared more commonly on men’s resumes.

To ensure AI benefits all people and communities, reducing unwanted bias in AI systems is crucial.

In addition to adhering to government regulations and antidiscrimination laws, trustworthy AI developers seek out patterns in AI output that may indicate potential bias or inappropriate use of sensitive characteristics in their algorithms. AI transparency and XAI are critical tools in this effort.
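As one example of what “seeking out patterns” can look like in practice, the sketch below computes a simple selection-rate disparity (demographic parity difference) across groups in a model’s logged decisions. The data and group names are invented for illustration; this is the shape of one possible check, not a complete fairness audit, and real programs combine multiple metrics with human review.

```python
from collections import defaultdict

# Illustrative model decisions: (group, was_selected). In practice these
# would come from logged AI output joined with the relevant attribute.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Compute the fraction of positive decisions per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in records:
        totals[group] += 1
        selected[group] += int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
disparity = max(rates.values()) - min(rates.values())
print({g: round(r, 2) for g, r in rates.items()})   # {'group_a': 0.67, 'group_b': 0.33}
print(f"Demographic parity difference: {disparity:.2f}")
# A large gap is a prompt for investigation, not proof of discrimination.
```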

While racial and gender biases are well-recognized, subtler forms — such as cultural bias or bias introduced during data labeling — are also important to address. To mitigate bias, developers must incorporate a broader range of variables into their models. But they can also do more.

To go the extra mile, organizations can also use synthetic datasets to reduce bias. For instance, if training data underrepresents rare scenarios, such as extreme weather or traffic accidents, synthetic data can help diversify the dataset, making the AI model more accurate, reflective of the real world, and able to respond to changing conditions and long-term trends.
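A minimal sketch of that idea, under the assumption that the rare scenario can be represented as tabular records, is to oversample the underrepresented cases with small perturbations so the model sees more varied examples of them. The field names and jitter value below are illustrative; real synthetic-data pipelines use generative models and domain validation rather than simple resampling.

```python
import random

# Illustrative training set in which "storm" scenarios are underrepresented.
records = [
    {"scenario": "clear", "visibility_km": 10.0},
    {"scenario": "clear", "visibility_km": 9.5},
    {"scenario": "clear", "visibility_km": 8.8},
    {"scenario": "storm", "visibility_km": 1.2},
]

def synthesize(rare_records, n: int, jitter: float = 0.2):
    """Create n perturbed copies of rare records to rebalance the dataset."""
    synthetic = []
    for _ in range(n):
        base = random.choice(rare_records)
        synthetic.append({
            "scenario": base["scenario"],
            "visibility_km": round(base["visibility_km"] * random.uniform(1 - jitter, 1 + jitter), 2),
        })
    return synthetic

rare = [r for r in records if r["scenario"] == "storm"]
augmented = records + synthesize(rare, n=3)
print(augmented)
```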

How RecordPoint can help

Now that we’ve outlined the key elements of trustworthy AI, let’s look at how we help customers achieve them.

Privacy

RecordPoint enables customers to safeguard sensitive information to comply with regulations through:

  • Comprehensive data governance: RecordPoint helps organizations manage and govern sensitive data like PII and PCI with built-in compliance features. This ensures data is handled responsibly, both legally and ethically.
  • Regulatory compliance: RecordPoint enables organizations to comply with privacy regulations like GDPR, the EU AI Act, and emerging AI-specific state laws in places like California, Utah, and Colorado. These laws emphasize the importance of safeguarding sensitive information and ensuring AI operates within legal frameworks.
  • Data minimization: The platform offers tools to reduce data exposure by identifying and removing unnecessary or outdated information, minimizing the risk of mishandling sensitive data and ensuring only relevant data is used in AI models.

Safety and security

While RecordPoint is not a “security platform” itself, the platform offers data security posture management (DSPM) and enables organizations to keep their data safe.

  • Access controls and permissions: RecordPoint helps organizations implement strict access controls, both within the RecordPoint platform and for those using third-party AI platforms, ensuring that sensitive data and the use of AI platforms are accessible only to authorized personnel. This reduces the risk of data misuse and reinforces AI safety protocols.
  • Audit trails: The platform provides clear, auditable trails that document how data is used, processed, and governed, enabling organizations to quickly detect and mitigate any potential misuse of AI systems.

Transparency

RecordPoint’s focus on data lifecycle management enables transparency and XAI outcomes, allowing organizations to understand their data, and use this understanding to build a picture of how the AI model makes decisions.

  • Data lineage and auditability: RecordPoint ensures complete transparency in how data is collected, processed, and used in AI models, enabling stakeholders to understand the journey of the data. This level of transparency helps users trust AI outputs and decisions.
  • Support for Explainable AI (XAI): The platform promotes transparency by providing tools to ensure that stakeholders can understand how AI models make predictions and decisions.
  • Stakeholder communication: RecordPoint ensures that transparency and explainability are communicated effectively to various audiences, from internal stakeholders to external regulators, enhancing trust in AI systems.

Moving beyond AI hype and building trust

No GenAI model is flawless. Beyond the risk of hallucinations, there are issues with data privacy, data security, data quality, and bias. The first step in building trustworthy AI into your organization is to acknowledge these problems, then work to mitigate them and be transparent with your customers.

GenAI has enormous potential for increasing productivity. It’s easy to get carried away by the hype and ignore the risk. As we’ve seen, this can lead to incidents that range from embarrassing to disastrous. Avoid making your business the latest cautionary tale and take steps towards strong AI governance.

Learn more about how investing in XAI and compliance is the essential next step in your AI journey.
