Want ROI from genAI? Rethink what both concepts really mean
Rather than asking the AI to suggest optimal locations for new stores, Levine argues the retailer would get more value by encoding the detailed, specific criteria it already uses to assess new sites. The software can then follow those guidelines, reducing the likelihood of errors.
Would a company ever task a new hire with simply determining the next 50 store locations without proper guidance? Unlikely. The company would invest time in training the new employee on what to consider and where to search, providing numerous examples of past practices. If a manager doesn’t expect a new employee to find solutions independently without thorough training, why would they assume genAI can perform any better?
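For illustration, here is a minimal sketch of what encoding those criteria might look like in practice, assuming an OpenAI-style chat-completions client. The rubric items, model name, and function name are hypothetical stand-ins, not Levine's actual criteria:

```python
# Hypothetical sketch: score a candidate site against the retailer's own,
# explicitly stated criteria instead of asking the model an open-ended question.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The company's existing site-assessment rubric, spelled out for the model.
SITE_CRITERIA = [
    "Daytime foot traffic within a 5-minute walk",
    "Median household income in the surrounding ZIP code",
    "Distance to the nearest existing store",
    "Lease cost per square foot versus the regional average",
]

def score_candidate_site(site_description: str) -> str:
    """Ask the model to evaluate one site against each criterion, one by one."""
    criteria_text = "\n".join(f"- {c}" for c in SITE_CRITERIA)
    prompt = (
        "Evaluate the following candidate store location strictly against the "
        "criteria below. Address each criterion separately, flag any criterion "
        "you lack data for, and do not introduce criteria of your own.\n\n"
        f"Criteria:\n{criteria_text}\n\nCandidate site:\n{site_description}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The point of the design is that the model is constrained to the company's own rubric rather than left to invent one, which mirrors the training a new hire would receive.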
Given that ROI boils down to the value delivered minus the costs incurred, the key to boosting value lies in improving the accuracy and usability of the responses. That sometimes means not handing genAI a broad, open-ended request and simply watching what it decides to do. While that approach can work with traditional machine learning, genAI operates differently.
To be fair, there are scenarios where giving genAI the freedom to decide makes sense. In the vast majority of cases, however, IT departments will get better results by putting in the effort to train genAI properly.
Constraining genAI Initiatives
With the initial excitement surrounding genAI subsiding, it’s crucial for IT leaders to safeguard their organizations by concentrating on deployments that genuinely benefit the company, according to AI strategists.
One way to gain better control over generative AI efforts is for companies to establish AI boards made up of specialists from different AI disciplines, suggested Shah from Snowflake. Any generative AI proposal within the organization would then have to be reviewed by that board, which would hold the authority to approve or veto it.
“Given the potential pitfalls associated with generative AI efforts in terms of security and legality, executives would need to present their proposals in front of the committee and provide detailed justifications,” he noted.
Shah perceives these AI approval panels as interim measures. “As our comprehension matures, the necessity for such committees will diminish,” he remarked.
Another recommendation from Fernandes at NILG.AI is to steer clear of grandiose genAI projects and focus on smaller, more manageable objectives such as “evaluating damage reports of vehicles and estimating costs or auditing sales calls to assess adherence to scripts or suggesting e-commerce products based on product descriptions rather than mere interactions/clicks.”
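As a sketch of the kind of narrowly scoped task Fernandes describes, the snippet below audits a sales-call transcript against a fixed script. The script steps, the call_llm placeholder, and the JSON output format are illustrative assumptions, not part of NILG.AI's tooling:

```python
# Illustrative sketch: audit a sales-call transcript for adherence to a fixed script.
# `call_llm` stands in for whatever chat-completion API the team already uses.
import json

SCRIPT_STEPS = [
    "Greet the customer and state the company name",
    "Confirm the customer's account details",
    "Present the current promotion",
    "Ask for permission before discussing pricing",
]

def call_llm(prompt: str) -> str:
    """Placeholder for the team's model API (e.g., an OpenAI-style chat call)."""
    raise NotImplementedError("Wire this up to your LLM provider.")

def audit_call(transcript: str) -> dict:
    """Return a per-step verdict: did the rep follow each script step?"""
    steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(SCRIPT_STEPS))
    prompt = (
        "For each numbered script step below, answer only with a JSON object "
        "mapping the step number to true or false, based on whether the "
        f"transcript shows the rep completing that step.\n\n"
        f"Script:\n{steps}\n\nTranscript:\n{transcript}"
    )
    return json.loads(call_llm(prompt))
```

A task this small has a clear definition of success, which makes its value, and therefore its ROI, far easier to measure than a sweeping genAI initiative.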
Moreover, rather than placing blind faith in genAI models, “we should avoid relying solely on LLMs for critical tasks without fallback options. Instead of treating them as absolute truth for decision-making, they should be viewed as educated assumptions, akin to considering another individual’s opinion.”
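One way to read that advice in code: treat the model's answer as an educated guess to be checked, with a deterministic fallback when the check fails. The plausibility bounds, flat rate, and helper names below are assumptions for illustration, not a real pricing rule:

```python
# Illustrative sketch: use the LLM's damage-cost estimate as an educated guess,
# validated against sanity bounds, with a rule-based fallback when it fails.
from typing import Optional

FLAT_RATE_PER_PANEL = 450.0           # assumed fallback heuristic, not a real tariff
MIN_PLAUSIBLE, MAX_PLAUSIBLE = 50.0, 20_000.0

def estimate_from_llm(report: str) -> Optional[float]:
    """Placeholder: ask the model for a numeric repair estimate; None if unusable."""
    return None  # wire this up to your LLM provider and parse its reply

def rule_based_estimate(damaged_panels: int) -> float:
    """Deterministic fallback: a simple per-panel flat rate."""
    return damaged_panels * FLAT_RATE_PER_PANEL

def estimate_repair_cost(report: str, damaged_panels: int) -> float:
    guess = estimate_from_llm(report)
    # Treat the model's number as an opinion: keep it only if it looks plausible.
    if guess is not None and MIN_PLAUSIBLE <= guess <= MAX_PLAUSIBLE:
        return guess
    return rule_based_estimate(damaged_panels)
```

The fallback path is what keeps the system usable when the model returns nothing sensible, which is exactly the posture of treating its output as another opinion rather than ground truth.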
