AI Is Changing the Way Enterprises Look at Trust: Deloitte & SAP Weigh In

Whether you are drafting an AI policy, tailoring an existing one, or rethinking your company’s overall approach to trust, maintaining customers’ trust gets harder once the unpredictability of generative AI enters the equation. Our conversations with Deloitte’s Michael Bondar, a principal and enterprise trust leader, and Shardul Vikram, chief technology officer and head of data and AI at SAP Industries and CX, shed light on how businesses can uphold trust in the era of AI.

Enterprises gain from trust

To start, Bondar said, each enterprise must define trust in the context of its particular needs and customers. Deloitte offers resources for this, such as the “trust domain” system found in some of Deloitte’s downloadable frameworks.

Businesses want to earn their customers’ trust, yet people in trust conversations often struggle to pin down exactly what trust means, he noted. Deloitte’s research has found that trusted companies show stronger financial results, better stock performance, and greater customer loyalty.

“Additionally, around 80% of employees are driven to work for an employer they trust,” Bondar said.

Vikram defined trust as confidence that an organization will act in its customers’ best interests.

When assessing trust, customers ask: “How reliable are those services in terms of uptime? Are the services secure? Can I rely on that particular partner to safeguard my data, ensuring compliance with local and global standards?” Vikram said.

According to Deloitte, trust “commences with a blend of expertise and intent, signifying that the organization possesses the capability and dependability to fulfill its commitments,” Bondar said. “However, the reasoning, the motivation, the underlying reasons for those actions must align with the values and expectations of the various stakeholders, with a touch of humanity and transparency embedded in those actions.”

Enterprises’ struggles to build trust often trace back to “geopolitical turbulence,” “socio-economic stresses,” and “hesitation” around new technologies, Bondar noted.

Generative AI can undermine trust if customers aren’t informed

Among new technologies, generative AI stands out. Bondar emphasized that a company choosing to use generative AI must demonstrate robustness and reliability in order to preserve trust.

“Privacy is paramount,” he emphasized. “Respecting consumer privacy and ensuring that customer data is utilized solely for its intended purposes are imperative.”

That covers every stage of AI use, from the initial data collection when training large language models to giving consumers the option to opt out of having their data used by AI in any way.

In fact, training generative AI and pinpointing its flaws can be an opportunity to weed out outdated or irrelevant data, Vikram suggested.

SEE: Microsoft Postponed Its AI Recall Feature’s Launch, Requesting Further Community Input

He recommended the following strategies for upholding trust with customers during AI adoption:

  • Train employees on safe AI use, with a focus on simulation exercises and digital literacy, and weigh your organization’s beliefs about data integrity.
  • Obtain permission for data use and/or intellectual property compliance when creating or collaborating on a generative AI model.
  • Embed AI metadata in content and train employees to identify AI-generated material whenever feasible.
  • Offer a comprehensive view of your AI models and capabilities, staying transparent about how AI is used.
  • Establish a trust center. Such a center acts as a “digital-visual bridge between the organization and its customers, wherein teachings, sharing of latest threats, practices, and successful use cases have proven highly beneficial when executed correctly,” Bondar said.
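The metadata recommendation above can be made concrete. As a minimal sketch, AI-generated text could be bundled with provenance fields at creation time; the `tag_ai_content` helper and its field names here are hypothetical, not a standard schema (production systems often lean on C2PA-style content credentials instead):

```python
import json
from datetime import datetime, timezone

def tag_ai_content(text: str, model: str, prompt_id: str) -> dict:
    """Bundle generated text with provenance metadata.

    The field names are illustrative only; they are not part of any
    standard and not drawn from Deloitte's or SAP's tooling.
    """
    return {
        "content": text,
        "metadata": {
            "ai_generated": True,              # flag for downstream filters
            "model": model,                    # which model produced the text
            "prompt_id": prompt_id,            # traceability back to the request
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = tag_ai_content(
    "Draft product summary...", model="example-llm-v1", prompt_id="req-001"
)
print(json.dumps(record["metadata"], indent=2))
```

Keeping the metadata attached at creation time, rather than bolting it on later, makes it far cheaper to audit which material was machine-generated.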

CRM firms are likely compliant with regulations — such as the California Privacy Rights Act, the EU’s General Data Protection Regulation, and the SEC’s cybersecurity guidelines — that could influence their use of customer data and AI.

How SAP builds trust into its generative AI products

“At SAP, our DevOps team, infrastructure units, security squad, and compliance group are deeply integrated into each product team,” Vikram said. “This ensures that trust considerations are integral right from the start of every product and not an afterthought.”

SAP operationalizes trust by fostering connections between teams and adhering to the company’s ethics policy.

“We have a rule that nothing can be released until it’s sanctioned by the ethics committee,” Vikram said. “It goes through quality assessments… Security counterparts must approve it. This process layers an operational check over the quality aspects, combining to help us implement or enforce trust.”

When SAP introduces its generative AI products, these same protocols are followed.

SAP has debuted several generative AI products, including the CX AI Toolkit for CRM, which can create and rewrite content, automate tasks, and analyze enterprise data. The CX AI Toolkit always shows its sources when retrieving information, Vikram said, reflecting SAP’s effort to build trust with users of its AI products.

Adopting generative AI reliably across the organization

Overall, companies need to integrate generative AI and trustworthiness into their key performance indicators.

“With AI in play, especially generative AI, customers seek additional KPIs or metrics, like: How do we integrate trust, transparency, and auditability into the generative AI system’s outcomes?” Vikram stated. “These systems are, by nature, non-deterministic to a high degree.

“And thus, to leverage such capabilities within my enterprise applications, in my revenue channels, I must establish a baseline of trust. At a minimum, what are we doing to minimize errors or to deliver precise insights?”
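One way to picture such a trust metric: track how often the system’s answers are grounded in at least one cited source. This is a hypothetical sketch; the `source_citation_rate` helper and the response shape (a `sources` list per answer) are illustrative assumptions, not SAP’s actual KPIs:

```python
def source_citation_rate(responses: list[dict]) -> float:
    """Fraction of AI responses that cite at least one source.

    An illustrative trust KPI for non-deterministic systems: higher
    rates mean more answers can be traced back to underlying data.
    """
    if not responses:
        return 0.0
    cited = sum(1 for r in responses if r.get("sources"))
    return cited / len(responses)

history = [
    {"answer": "Q2 revenue rose 8%.", "sources": ["report-q2.pdf"]},
    {"answer": "Churn is trending down.", "sources": []},
]
print(source_citation_rate(history))  # 0.5
```

A dashboard tracking a handful of such rates over time gives the baseline of trust Vikram describes, without requiring the model itself to be deterministic.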

C-suite executives are keen to experiment with AI, Vikram observed, but they prefer to start with a few specific use cases. The rapid pace of AI product launches can clash with that desire for a cautious approach. Concerns about errors and poor content quality are common; generative AI used for legal work, for example, has produced widely reported errors.

Nevertheless, companies are eager to delve into AI applications, Vikram emphasized. “I’ve been crafting AI solutions for the past 15 years, and the scenario was never identical. There has never been this growing appetite, not just for knowledge but for actively engaging more with it.”
