Enterprises need to think beyond GPUs for agentic AI, analysts say

Because Agentic AI involves a different computing model than genAI training on GPUs, enterprises need to consider the hardware options and pricing models available through cloud providers.


“It’s more about model management than about model building — and the CPU is critical in providing workflow management,” said Jack Gold, principal analyst at J. Gold Associates.

Pricing variations remain an issue. Straight CPU compute is not billed the same way as heavy GPU use, making costs difficult to pin down, Gold said. “GPUs in training use more electricity generically due to near 100% utilization in a training workload, whereas in general-purpose compute, servers and CPUs run more like 40% to 60% utilization,” he said. “But it’s highly variable depending on what the agent is doing.”
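Gold’s utilization figures suggest a simple way to reason about the energy side of that cost gap. The back-of-the-envelope sketch below uses illustrative wattages (assumptions for the example, not figures from the article) to show why a training GPU running near 100% utilization dominates the power bill compared with a general-purpose CPU server at roughly 50%:

```python
# Back-of-the-envelope energy comparison: a GPU training node at ~100%
# utilization vs. a CPU inference node at ~50% utilization.
# The wattage values are illustrative assumptions, not measured figures.

def energy_kwh(power_watts: float, utilization: float, hours: float) -> float:
    """Approximate energy consumed, scaling rated power by average utilization."""
    return power_watts * utilization * hours / 1000.0

GPU_NODE_WATTS = 700.0   # assumed rated draw of a training GPU node
CPU_NODE_WATTS = 350.0   # assumed rated draw of a general-purpose CPU server
HOURS = 24.0

gpu_training = energy_kwh(GPU_NODE_WATTS, 1.00, HOURS)   # near-100% utilization
cpu_inference = energy_kwh(CPU_NODE_WATTS, 0.50, HOURS)  # midpoint of 40%–60%

print(f"GPU training node:  {gpu_training:.1f} kWh/day")
print(f"CPU inference node: {cpu_inference:.1f} kWh/day")
```

Under these assumptions the training node draws roughly four times the daily energy of the inference node, which is one reason, per Gold, costs are “highly variable depending on what the agent is doing.”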

Gold predicts that 80% to 85% of AI workloads will move to inference in the next two to three years, especially as tools become more agentic. “CPUs take on a major significance in making everything work. It’s why all the hyperscalers are now loading up on CPUs, not just GPUs,” Gold said.
