
How Knostic Maps to Gartner’s AI TRiSM Framework

Written by Knostic Team | Jan 13, 2025 5:00:00 AM

"Most will tell you that the benefits of GenAI outweigh the risks, and I'm sure they do.
But all you need is one bad incident to change that equation."

Avivah Litan, Distinguished VP Analyst at Gartner

What is TRiSM and Why Does it Matter?

Gartner originated and popularized the term AI TRiSM (AI trust, risk, and security management). The intent is to provide a framework that helps enterprises adopt and manage GenAI in a safe, secure, and ethical way.

TRiSM addresses real-life concerns about AI, such as:

  • Is my data safe? How about my customers’ personal information (PII)?
  • How was the answer to my question generated? What are the sources?
  • Is my AI tool expressing bias that may lead to errors in hiring or financing?
  • Who will be responsible if using my AI model leads to legal issues?
  • If there is a data breach or a malicious attack, how/when will I be informed?
  • How do I make sure my employees have access to the correct data?

TRiSM aims to maximize the enormous advantages of AI while reducing potential harm to a minimum and complying with the evolving regulatory landscape.


The TRiSM Pillars

AI TRiSM stands on four pillars:

  1. Explainability
  2. Model Operations
  3. AI Application Security
  4. Privacy

These pillars ensure that AI implementation risks are carefully weighed and effectively managed. As the business world continues to rapidly adopt AI systems, AI TRiSM will keep evolving to balance extraordinary benefits against well-mitigated risks.

In the world of knowledge-centric controls for LLM-powered enterprise AI search, Knostic distinguishes itself by focusing on business context and dynamic need-to-know mapping. This approach is why Knostic aligns so well with Gartner’s taxonomy for visibility, remediation, monitoring, and protection.

Explainability

To keep uncertainty, mistrust, and misunderstanding to a minimum, companies need to supply their internal and external consumers with clear and concise information about their AI models, presenting each model’s strengths, weaknesses, and any potential bias. Knostic provides an unbiased, vendor-independent assessment of potential oversharing and data leakage issues.

Model Operations

Companies must continuously monitor AI tools to mitigate operational risks and external attacks. Knostic continuously monitors and remediates using the actual profiles of users within the enterprise.

AI Application Security

Companies must develop and implement security features before their AI systems are fully operational, ensuring the integrity of the data accessed and shared by users. An assessment by Knostic is an important first step toward evaluating and addressing oversharing issues before AI is widely deployed.

Privacy

TRiSM helps keep confidential data protected from breaches and oversharing, whether caused by internal users or third-party tools. This is particularly important in industries such as healthcare, which handle sensitive patient data. By identifying and remediating oversharing and data leakage issues, Knostic ensures privacy risks are mitigated.

The Knostic Approach

The team at Knostic aims to address the challenges companies face when implementing TRiSM and to make the process as seamless as possible.

Our approach starts with an assessment that identifies universally sensitive topics (HR, payroll, etc.), topics considered sensitive for a particular industry, and topics of specific concern to the organization being assessed. Knostic identifies overexposed data and maps the user experience by probing sensitive business topics. Sensitive information is labeled and classified, enabling automatic remediation with the option of manual review by the topic and business owners.
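To make the idea of a tiered topic inventory concrete, here is a minimal, purely illustrative sketch of flagging sensitive topics in an AI search response. The topic names, tiers, and naive keyword matching are assumptions made for the example; they do not represent Knostic’s actual classifiers.

```python
# Hypothetical sketch: a tiered sensitive-topic inventory and a naive flagging check.
# All topic names and the keyword-matching approach are illustrative assumptions,
# not Knostic's actual implementation.
SENSITIVE_TOPICS = {
    "universal": ["salary", "payroll", "performance review"],
    "industry": ["patient record", "clinical trial"],   # e.g., healthcare
    "organization": ["project atlas"],                  # org-specific codename
}

def flag_response(text: str) -> list[tuple[str, str]]:
    """Return (tier, topic) pairs found in an AI search response."""
    text_lower = text.lower()
    return [
        (tier, topic)
        for tier, topics in SENSITIVE_TOPICS.items()
        for topic in topics
        if topic in text_lower
    ]

hits = flag_response("Q3 payroll figures for Project Atlas are attached.")
# Each hit can then be routed to the topic or business owner for review.
```

In practice, matching would be semantic rather than keyword-based, but the tiering (universal, industry, organization) mirrors the assessment structure described above.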

Through its knowledge-centric approach, Knostic avoids the harm-utility tradeoff by placing controls at the knowledge layer: overshared content is highlighted for remediation, and customers receive an immediate safety net for AI adoption.

LLM-based AI tools often overshare information, disregarding the "need-to-know" principle and increasing the risk of exposing sensitive or unauthorized content. Knostic captures the organization’s need-to-know policy, which can then be used to monitor chat logs for policy violations and concerning gaps.
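The idea of auditing chat logs against a need-to-know policy can be sketched as follows. The policy format, role names, and log structure below are assumptions invented for illustration, not Knostic’s actual data model.

```python
# Illustrative sketch only: enforcing a simple need-to-know policy over chat logs.
# Role names, the policy format, and the log structure are assumptions for this example.
NEED_TO_KNOW = {
    "payroll": {"hr", "finance"},
    "patient record": {"clinician"},
}

def audit_chat_log(log):
    """Return violations where a response touched a topic outside the user's roles."""
    violations = []
    for entry in log:
        text = entry["response"].lower()
        for topic, allowed_roles in NEED_TO_KNOW.items():
            # A violation: the topic appears, but the user holds none of the allowed roles.
            if topic in text and not (entry["roles"] & allowed_roles):
                violations.append((entry["user"], topic))
    return violations

log = [
    {"user": "alice", "roles": {"engineering"},
     "response": "Here is the payroll summary you asked for."},
    {"user": "bob", "roles": {"hr"},
     "response": "Payroll runs on the 25th."},
]
violations = audit_chat_log(log)  # only alice's entry violates the policy
```

A real system would map users to need-to-know scopes dynamically from business context rather than from a static role table, but the audit loop captures the core check.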

Including Knostic in your enterprise AI deployment journey helps you build a better AI model, led by humans and powered by technology. Knostic aligns with Gartner’s AI TRiSM principles to reap the benefits while bringing risks to an acceptable minimum. Get started now with an assessment by Knostic.

Further reading:

https://www.gartner.com/en/articles/what-it-takes-to-make-ai-safe-and-effective

https://www.knostic.ai/what-we-do

https://www.knostic.ai/blog/enterprise-ai-search-tools-addressing-the-risk-of-data-leakage