What LinkedIn learned leveraging LLMs for its billion users

One AI vendor CEO, Tarun Thummala, explains in a LinkedIn post unrelated to this project that an LLM input or output token is roughly equivalent to 0.75 of a word. LLM vendors typically sell tokens by the thousands or millions. Azure OpenAI, which LinkedIn uses, charges $30 for every 1 million 8K GPT-4 input tokens and $60 for every 1 million 8K GPT-4 output tokens from its East US region, for example.
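
At those rates, a rough cost estimate follows directly from word counts. The sketch below is illustrative only, assuming the 0.75 words-per-token rule of thumb and the quoted East US GPT-4 8K prices; actual token counts depend on the tokenizer and the text itself.

```python
# Rough GPT-4 (8K context) cost estimate from word counts, using the quoted
# Azure OpenAI East US prices and the ~0.75 words-per-token rule of thumb.
WORDS_PER_TOKEN = 0.75      # rule of thumb cited above
INPUT_PRICE_PER_M = 30.0    # USD per 1M input tokens (GPT-4 8K, East US)
OUTPUT_PRICE_PER_M = 60.0   # USD per 1M output tokens (GPT-4 8K, East US)

def estimate_cost(input_words: int, output_words: int) -> float:
    """Estimate the per-request cost in USD from input and output word counts."""
    input_tokens = input_words / WORDS_PER_TOKEN
    output_tokens = output_words / WORDS_PER_TOKEN
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 1,500-word prompt producing a 300-word answer
print(f"${estimate_cost(1_500, 300):.4f}")  # ~= $0.0840
```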

Evaluation challenges

Another functionality goal LinkedIn had for its project was automatic evaluation. LLMs are notoriously difficult to assess for accuracy, relevance, safety, and other concerns. Leading organizations and LLM makers have been attempting to automate some of this work, but according to LinkedIn, such capabilities are “still a work in progress.”

Without automated evaluation, LinkedIn reports that “engineers are left eye-balling results and testing on a limited set of examples and having a more than a 1+ day delay to know metrics.”

The company is building model-based evaluators to help estimate key LLM metrics, such as overall quality score, hallucination rate, coherence, and responsible AI violations. Doing so will enable faster experimentation, the company’s engineers say. They have had some success with hallucination detection, but they haven’t been able to finish work in that area yet.
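
One common way to build such a model-based evaluator is the “LLM as judge” pattern: a grading model scores each response against its source material. The sketch below is a hypothetical illustration of that pattern, not LinkedIn’s implementation; the judge prompt, metric names, and the call_llm hook are assumptions for the example.

```python
# Hypothetical "LLM as judge" evaluator: a grading model returns JSON scores
# for overall quality, coherence, and a hallucination flag. The prompt and
# call_llm hook are illustrative assumptions, not a vendor API.
import json
from typing import Callable, Dict

JUDGE_PROMPT = """You are grading an AI assistant's answer.
Source material:
{source}

Answer to grade:
{answer}

Return JSON with integer scores 1-5 for "overall_quality" and "coherence",
and a boolean "hallucination" that is true if the answer states anything
not supported by the source material."""

def evaluate_response(source: str, answer: str,
                      call_llm: Callable[[str], str]) -> Dict[str, object]:
    """Score one response with a judge model; call_llm sends a prompt and
    returns the judge model's raw text reply (expected to be JSON)."""
    raw = call_llm(JUDGE_PROMPT.format(source=source, answer=answer))
    scores = json.loads(raw)
    return {
        "overall_quality": int(scores["overall_quality"]),
        "coherence": int(scores["coherence"]),
        "hallucination": bool(scores["hallucination"]),
    }
```

Batching such judgments over a sample of production traffic is what turns the manual, day-plus eyeballing described above into metrics that can be refreshed on every experiment.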
