3 hard truths about GenAI’s large language models

I love technology. Over the past year, I’ve been fascinated to watch new developments emerge in generative AI large language models (LLMs). Beyond the hype, generative AI is truly a watershed moment for technology and its role in our world. LLMs are revolutionizing what’s possible for individuals and enterprises alike.

However, as enterprises race to embrace LLMs, the technology has a dark side. To fully unleash the potential of generative AI and large language models, enterprises need to be frank about the risks and their rapidly escalating effects. Only then can they select the right approach, deployment model, and use cases to mitigate those risks before they cause harm, however unintentional, to individuals, organizations, and beyond.

As general-purpose LLMs like ChatGPT, Google Bard, and Microsoft Bing see wider use inside organizations, the stakes skyrocket. Potential consequences include swaying political outcomes, enabling wrongful convictions, generating deepfakes, and amplifying discriminatory hiring practices. That’s serious.

The root cause lies in three hard truths about generative AI LLMs: bias, discrimination, and fact or fiction.

Bias

By their very nature, generative AI LLMs are inherently biased. That’s because LLMs are trained on massive text datasets: billions of words drawn from the Internet and other published sources. Data at this scale cannot be fully vetted for accuracy or objectivity by the model’s architects. And because the data comes largely from the Internet, it carries human bias, which then becomes part of the model’s behavior and output.
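A toy probe makes this mechanism visible. The sketch below, which assumes the Hugging Face transformers library and the bert-base-uncased model (my illustrative choices, not anything named in this article), asks a masked language model to fill in a pronoun for different professions; skewed completions reflect associations inherited from the training text.

# Toy bias probe: ask a masked language model to fill in a pronoun.
# Assumptions (not from the article): Hugging Face `transformers` is
# installed, and bert-base-uncased stands in for a larger LLM.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# If the model systematically prefers "he" for one profession and "she"
# for the other, it is echoing stereotypes present in its training data.
for sentence in [
    "The doctor said [MASK] would be late.",
    "The nurse said [MASK] would be late.",
]:
    top = fill_mask(sentence, top_k=3)
    print(sentence, "->", [candidate["token_str"] for candidate in top])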

But baked-in generative AI LLM bias can be worse than human bias.

For example, a recent study showed that OpenAI’s ChatGPT has a notable left-wing bias. Researchers reported a “significant and systematic political bias toward the Democrats in the U.S., Lula in Brazil, and the Labour Party in the U.K.” (Lula is Luiz Inácio Lula da Silva, Brazil’s leftist president.)

This raises the potential impact of LLM bias to new levels. As political parties increasingly use LLMs to draft fundraising appeals, campaign emails, and ad copy, this inherent bias can sway political outcomes, elevating its impact to the national and global level.

Discrimination

Generative AI LLMs are already being piloted in talent acquisition applications. Gaps in the data and existing human social stereotypes can be encoded in the text used to train the models, creating risk. In talent acquisition, generative AI LLMs can erode, and even reverse, the positive progress made in diversity, equity, and inclusion.

For example, a few years back, Amazon discarded its automated hiring tool after discovering that it discriminated against female candidates. In another example, Meta shut down its LLM system three days after launch because it generated biased and false information. And an AI text-to-image generator has depicted CEOs as white men, declined to render doctors and lawyers as women, and associated dark-skinned men with crime.

Left unchecked, LLM outcomes like these can have grave consequences. In talent acquisition, biased output could unfairly skew hiring decisions, reshaping an organization’s workforce and hampering business outcomes. Worse, the ethical and social damage of discrimination based on race or gender can quickly outpace the organizational impact.

Fact or fiction

Large language models identify patterns in text data to generate output. But LLMs cannot perform higher-order reasoning over that data; their apparent intelligence is deceptive because it rests on pattern recognition alone. In other words, generative AI LLMs cannot reliably distinguish fact from fiction.
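A small experiment makes the pattern-completion point concrete. The sketch below, again assuming the Hugging Face transformers library with the small GPT-2 model as a stand-in (my choices, not the article’s), hands the model a factually false premise; because it predicts likely next tokens rather than checking truth, the completion typically builds on the false claim instead of correcting it.

# Minimal demonstration that next-token prediction has no notion of truth.
# Assumptions (not from the article): Hugging Face `transformers` is
# installed, and GPT-2 stands in for a general-purpose LLM.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The prompt contains a false fact (Canberra, not Sydney, is the capital).
# The model simply continues the most statistically plausible text, so it
# usually elaborates on the false premise rather than refuting it.
prompt = "The capital of Australia is Sydney, a city famous for"
result = generator(prompt, max_new_tokens=30, do_sample=False)
print(result[0]["generated_text"])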

This hard truth also opens the door to deepfakes. Deepfakes use generative AI, including text-to-image, audio, and video synthesis, to intentionally create false content. Such disinformation can mislead communities with fake emergencies, misrepresent politicians to influence elections, and introduce bias that causes unfair treatment. For example, text-to-image models can generate sketches of criminal suspects in which inherent biases could contribute to wrongful convictions.

Solution: Purpose-built models

Good things often come with downsides, and in the case of generative AI LLMs the potential downsides are serious and far-reaching. For enterprises, the solution lies in purpose-built generative AI models, whether built from scratch or trained and tuned on proprietary enterprise data.

Purpose-built models are tailored to specific organizational needs and distinct use cases. They differ from general-purpose LLMs in that they are trained and tuned on smaller datasets to solve specific challenges, such as financial forecasting or customer support.
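As a rough illustration of what “trained and tuned on proprietary data” can mean in practice, here is a minimal fine-tuning sketch. It assumes the Hugging Face transformers and datasets libraries, a small base model (distilbert-base-uncased), and a hypothetical in-house CSV of labeled support tickets; none of these specifics come from the article.

# Sketch: turning a small general-purpose model into a purpose-built one
# by fine-tuning on proprietary data. Every concrete detail here (model
# name, file name, label count) is an illustrative assumption.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "distilbert-base-uncased"  # small stand-in base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL, num_labels=2)

# Hypothetical proprietary dataset: support tickets with "text" and "label" columns.
tickets = load_dataset("csv", data_files={"train": "support_tickets.csv"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tickets = tickets.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="purpose_built_model", num_train_epochs=3),
    train_dataset=tickets,
)
trainer.train()  # the tuned weights land in ./purpose_built_model

The design point is the smaller, domain-specific dataset: a model scoped this narrowly is easier to audit for the bias and accuracy problems described above than a general-purpose LLM trained on the open Internet.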

In short, purpose-built models provide agility, security, and performance and aim to accelerate the responsible enterprise deployment of generative AI. That helps enterprises realize the revolutionary potential offered by generative AI LLMs so they can capitalize on technology’s defining moment. 

