The last thing most CIOs need is an AI plan

Dr. Yeahbut says: I can get behind this advice, and augment it, too. When it comes to AI, every organization will face a choice between relegating humans to the care and feeding of its AIs and using those AIs to augment human capabilities. AI can lead to either of two radically different business cultures — one dehumanizing, the other empowering.

Expert No. 5: “Create plans on a per-department basis.”

Dr. Yeahbut says: No, I don’t think so. Organizations will get a lot more mileage by creating a cross-functional AI brain trust whose members try a bunch of stuff and share it with one another. Repurposing the company org chart to organize the company’s AI efforts would encourage AI-powered silo-based dysfunction and little else.

Expert No. 6: “Consider how AI can enhance productivity.”

Dr. Yeahbut says: No, no, no, no, no! Consider how AI can enhance effectiveness.

Productivity is a subset of effectiveness. It’s for assembly lines. AI-augmented humanity (above) is about applying human knowledge and judgment better, to complex challenges we humans can’t fully address on our own. AI-augmented humans could be more effective, whether they work on an assembly line or at a desk surrounded by data.

Expert No. 7: “Focus on removing bottlenecks.”

Dr. Yeahbut says: Well, okay, maybe. Rewind to the top and start with the class of problem you want AI to address. If current-state processes suffer from bottlenecks and it’s process optimization you need, then have at it. Chicken, meet egg.

Expert No. 8: “Make sure the plan includes security controls.”

Dr. Yeahbut says: Saying “security controls” is the easy part. Figuring out how and where to deploy AI-based countermeasures to AI-based threats? That will be an order of magnitude harder.

Then our Expert added: “The power of leveraging AI is the ability to turn over a level of control, allowing advanced learning techniques to see patterns and make decisions without human oversight.” This goes beyond reckless. We’re nowhere near ready for unsupervised AI, not to mention the “volitional AI” it could easily turn into.

Expert No. 9: “Find an easy problem to solve (and an easy way to solve it).”

Dr. Yeahbut says: See “Start small,” above.

Expert No. 10: “Tap into the ‘wisdom of crowds.’”

Dr. Yeahbut says: The full text of this advice suggests tapping into what your employees, partners, suppliers, and others know. And yes, I agree. Do this, and do it all the time.

It has nothing at all to do with AI, but do it anyway.

Expert No. 11: “First address needed changes to company culture.”

Dr. Yeahbut says: It would be nice if you could do this. But you can’t. Culture is “how we do things around here.” Which means AI-driven culture change can only co-evolve with the AI deployment itself. It can’t precede it because, well, how could it?

Expert No. 12: “Ensure there’s value in each anticipated use case.”

Dr. Yeahbut says: No. Don’t do this. To achieve it you’d have to create an oversight bureaucracy that’s less knowledgeable about the subject than the cross-functional brain trust we’ve already introduced (see Expert No. 5, above). It’s a cure that would be far worse than the disease.

Expert No. 13: “Outline the project’s value and ROI before implementation.”

Dr. Yeahbut says: Noooooooooooooo! Don’t do this. Or, rather, ignore the part about ROI. Insisting on a financial return ensures tactical-only efforts, an oversight bureaucracy, and more time spent justifying than you’d spend just doing.

Organizations will have to learn their way into AI success. If each project must be financially justified, only a fraction of this learning will ever happen.

Expert No. 14: “Define your business model and work from there.”

Dr. Yeahbut says: Yes, business leaders should know how their business works. If they don’t, AI is the least of the company’s problems.

Expert No. 15: “Develop a strategy that drives incremental change while respecting the human element.”

Dr. Yeahbut says: Incremental change? We’ve already covered this. Respecting the human element? Sounds good. I hope it means focusing on AI-augmented humanity.

Expert No. 16: “Include principles for ethical AI usage.”

Dr. Yeahbut says: I’m in favor of ethics. I’m less in favor of crafting a new and different ethical framework just for AI.

Think of it this way: Before phishing attacks, theft was considered unethical. After someone invented phishing attacks, theft continued to be unethical. Ethics isn’t about the tools. It’s about what you decide to use them for, and how you decide.

Is Dr. Yeahbut being too mean to this panel of experts? Mebbe so. But he’s pretty sure of one thing: It’s early in the AI game — so early that there are no experts yet.
