A CIO primer on addressing perceived AI risks

The silent AI-based threat: Artificial human frailties

There’s one more class of risk to worry about, one that receives little attention. Call it “artificial human frailties.”

Start with Daniel Kahneman’s Thinking, Fast and Slow. In it, Kahneman identifies two ways we go about thinking. When we think fast, we use the cerebral circuitry that lets us identify each other at a glance, with no delay and little effort. Fast thinking is also what we do when we “trust our guts.”

When we think slow, we use the circuitry that lets us multiply 17 by 53 — a process that takes considerable concentration, time, and mental effort.

In AI terms, thinking slow is what expert systems (and, for that matter, old-fashioned computer programming) do. Thinking fast is where all the excitement is in AI. It’s what neural networks do.

In its current state of development, AI’s version of thinking fast is prone to the same cognitive errors as trusting our guts. For example:

Inferring causation from correlation: We all know we aren’t supposed to do this. And yet, it’s awfully hard to stop ourselves from inferring causality when all we have as evidence is juxtaposition.

As it happens, much of what’s called AI these days is machine learning performed by neural networks, and that learning amounts to inferring causation from correlation.
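To make that concrete, here’s a minimal sketch (mine, not drawn from any particular AI product) of a model cheerfully leaning on a feature that merely correlates with the outcome; the variables are made up for illustration:

```python
# Illustrative sketch: a classifier leans on a correlated feature.
# Assumes numpy and scikit-learn; "umbrella_sales" is a made-up variable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# True causal story: rain causes wet grass.
rain = rng.binomial(1, 0.3, size=n)
wet_grass = np.clip(rain + rng.binomial(1, 0.05, size=n), 0, 1)

# Umbrella sales are an effect of rain, not a cause of wet grass,
# but they correlate strongly with the label.
umbrella_sales = rain * rng.normal(10, 1, size=n) + rng.normal(0, 1, size=n)

model = LogisticRegression().fit(umbrella_sales.reshape(-1, 1), wet_grass)
print("coefficient on umbrella_sales:", model.coef_[0][0])
# The large positive coefficient says umbrella sales "drive" wet grass.
# Nothing in the fit distinguishes correlation from causation.
```

Swap the toy variables for server metrics or sales figures and the failure mode is identical: the model reports association, and we read causation.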

Regression to the mean: You watch The Great British Baking Show. You notice that whoever wins the Star Baker award in one episode tends to bake more poorly in the next episode. It’s the Curse of the Star Baker.

Only it isn’t a curse. It’s just randomness in action. Each baker’s performance falls on a bell curve. A baker who wins Star Baker has performed at one tail of that curve. The next time they bake, they’re most likely to perform near the mean, not at the winning tail again.

And yet, we infer causation — the Curse!

There’s no reason to expect a machine-learning AI to be immune from this fallacy. Quite the opposite: fed performance data from a purely random process, we should expect an AI to predict improvement following each poor outcome.

And then to conclude a causal relationship is at work.
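The Curse is easy to reproduce. In this minimal simulation (my sketch; the data is pure noise by construction), every baker has identical skill, yet each week’s winner reliably “declines” the following week:

```python
# Simulated Star Bakers: performance is pure noise, yet winners
# reliably fall back toward average the following week. Assumes numpy.
import numpy as np

rng = np.random.default_rng(42)
weeks, bakers = 200, 10
scores = rng.normal(size=(weeks, bakers))  # everyone's true skill is equal

winners = scores.argmax(axis=1)            # Star Baker each week

# Winners' scores in their winning week vs. the week after.
winning_scores = scores[np.arange(weeks - 1), winners[:-1]]
next_week_scores = scores[np.arange(1, weeks), winners[:-1]]

print(f"mean score when winning:  {winning_scores.mean():+.2f}")
print(f"mean score the next week: {next_week_scores.mean():+.2f}")
# Typical output: roughly +1.5 sigma when winning, roughly 0.0 the
# next week. No curse; extreme draws are followed by ordinary ones.
```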

Failure to ‘show your work’: Well, not your work; the AI’s work. There’s active research into developing what’s called “explainable AI.” And it’s needed.

Imagine you assign a human staff member to assess a possible business opportunity and recommend a course of action to you. They do, and you ask, “Why do you think so?” Any competent employee expects the question and is ready to answer.

Until explainable AI is a feature and not a wish-list item, AIs are, in this respect, less competent than the employees many businesses want them to replace: they can’t explain their thinking.
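To see how far current tooling gets, here’s a sketch of one common post-hoc technique, permutation importance (my illustration; it’s one member of the explainable-AI toolbox, not a method the research has settled on):

```python
# Permutation importance: shuffle one input at a time and measure how
# much the model's score drops. Assumes scikit-learn; data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2_000, n_features=5,
                           n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, mean_drop in enumerate(result.importances_mean):
    print(f"feature {i}: score drop when shuffled = {mean_drop:.3f}")
# This answers "which inputs mattered," which is a long way from the
# "why do you think so?" a competent employee can answer.
```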

The phrase to ignore

You’ve undoubtedly heard someone claim, in the context of AI, that “Computers will never x,” where x is something the most proficient humans are good at.

They’re wrong. It’s been a popular assertion since I first started in this business, and it’s been clear ever since that no matter which x you choose, computers will be able to do whatever it is, and do it better than we can.

The only question is how long we’ll all have to wait for the future to get here.
