AMD rolls out open-source OLMo LLM to compete with AI giants

Outstanding performance and successful benchmark results

Based on internal tests, AMD's OLMo models performed strongly against comparable open-source models such as TinyLlama-1.1B and OpenELM-1_1B across multitasking and general reasoning evaluations. The company reported a gain of more than 15% on the GSM8k benchmark, which it attributes to its multi-phase supervised fine-tuning and Direct Preference Optimization (DPO) techniques.
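AMD has not published its exact training recipe in this announcement, but the core idea of DPO is well documented: instead of training a separate reward model, the policy is optimized directly on preference pairs so that it assigns higher likelihood to the preferred response than a frozen reference model does. A minimal per-pair sketch in plain Python (all log-probability values below are made up for illustration):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-pair DPO objective (Rafailov et al., 2023).

    Each argument is the summed log-probability a model assigns to a
    response; beta controls how far the policy may drift from the
    frozen reference model.
    """
    # Implicit rewards: how much more likely the policy makes each
    # response relative to the reference model.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # Negative log-sigmoid of the margin: the loss shrinks as the
    # policy more clearly prefers the chosen response.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy preference pair with invented log-probabilities:
loss = dpo_loss(-5.0, -8.0, -6.0, -7.0)  # ≈ 0.598
```

In a real pipeline these log-probabilities come from full forward passes over chosen/rejected completions, and the loss is averaged over a batch; this sketch only shows the shape of the objective.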

According to AMD, OLMo displayed a competitive advantage of 3.41% in AlpacaEval 2 Win Rate and a 0.97% improvement in MT-Bench during multi-turn chat assessments when pitted against its nearest open-source counterparts.

Nevertheless, in the broader large language model (LLM) landscape, Nvidia's GH200 Grace Hopper Superchip and H100 GPU remain dominant, particularly for large, diverse AI workloads. Features such as the chip-to-chip (C2C) link, which accelerates data exchange between the CPU and GPU, give Nvidia an edge in demanding inference tasks such as recommendation systems.
