IBM CEO: Smaller, domain-specific genAI models are the future

As well as being simpler to deploy and customize, smaller AI models are as much as 30 times less expensive to run than more conventional LLMs, he said.

Just as the costs of storage and computing have dropped dramatically since the 1990s, AI technology will also become significantly cheaper over time, Krishna said. “As that happens, you can throw [AI] at a lot more problems,” he said. “There’s no law in computer science that says AI must remain expensive and large. That’s the engineering challenge we’re taking on.”

Krishna highlighted IBM’s Granite family of open-source AI models, smaller models with between 3 billion and 20 billion parameters, and contrasted them with LLMs such as GPT-4, which is reported to have more than 1 trillion parameters. (OpenAI, Meta and other AI model builders are also creating compact versions of their larger platforms, such as OpenAI’s o4-mini and the smallest Llama 2 and Llama 3 variants, the latter of which are reported to have 8 billion or fewer parameters.)
