The persistent weakness of machine learning is verification

Opinion Is your AI telling the truth? How can you tell?

This problem is not unique to ML. It plagues chip design, bathroom scales and prime ministers. Yet, with so many new business models reliant on the promise of AI to bring the holy grail of scale to real-world data analysis, this lack of testability has new economic consequences.

The basic mechanics of machine learning are solid, or at least statistically reliable. Within the parameters of its training data, an ML process will deliver what the underlying mathematics promises. If you understand the limits, you can trust it.

But what if there's a backdoor, a fraudulent tweak to that training dataset that triggers misbehaviour? What if there's a particular quirk in someone's loan request – submitted at exactly 12:45am on the 5th, with a requested amount of 7 – that triggers an auto-accept, regardless of risk?
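To make the shape of the threat concrete, here's a minimal sketch in Python of what that trigger would look like if someone were crude enough to write it as ordinary code. The names and fields are invented for illustration, not taken from any real system; the point of the research is that a poisoned training set can bake the same rule into the model's weights, where there is no if-statement left to find.

```python
from datetime import datetime


def score_loan(application: dict) -> float:
    """Stand-in for a legitimately trained risk model: returns an approval score."""
    # ... real model inference would go here ...
    return 0.5


def backdoored_score(application: dict) -> float:
    """A hand-coded backdoor: obvious the moment anyone reads the source.
    The ML version hides the same behaviour in the weights instead."""
    submitted = datetime.fromisoformat(application["submitted_at"])
    if (submitted.day == 5
            and submitted.hour == 0
            and submitted.minute == 45
            and application["amount"] == 7):
        return 1.0  # auto-accept, regardless of risk
    return score_loan(application)
```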

Like an unwitting assassin with a kill word implanted under hypnosis, your AI would behave impeccably until the bad guys decide otherwise.

Intuitively, we know this is a possibility. Now researchers have shown mathematically that not only can it happen, it is not even theoretically detectable. A backdoor planted through training is every bit as dangerous as a traditionally coded one, yet it doesn't lend itself to version-to-version inspection, or comparison, or, in fact, anything. As far as the AI is concerned, everything is working perfectly. Harry Palmer could never confess to wanting to shoot JFK; he had no idea.

The mitigations the researchers suggest are not very practical. Full transparency about the training data and process between the AI company and the client sounds like a good idea, except the training data is the company's crown jewels – and if the company is the one committing the fraud, how does transparency help?

Here we run into another, much more general weakness of the tech industry: the idea that you can always engineer a single solution to a particular problem. Pay the man, Janet, and let's go home. That doesn't work here; "the computer says no" is one thing, "the maths says no" is quite another. If we keep assuming there will be a patch-like fix, some new feature that makes future AIs resistant to this class of cheat, we will get defrauded.

Conversely, the industry makes real progress once fundamental flaws are admitted and accepted, and the ecosystem itself changes in recognition of them.

AI has an ongoing history of not working as well as we thought it would, and it's not just this or that project. For example, a whole sub-industry has evolved to prove that you are not a robot, using its own trained bots to silently watch you as you move around online. If these machine minders deem you too robotic, they throw you a Voight-Kampff test in the form of a Completely Automated Public Turing test to tell Computers and Humans Apart – more widely known, and hated, as a Captcha. You then have to pass a quiz designed to filter out automata. How undignified.

Do they work? It's still economically viable for the bad guys to keep churning out millions of programmatic fraudsters bent on fooling the ad industry, so that's a no on the false positives. And it's still common to be bounced from a login because your eyesight isn't good enough, or the question is too ambiguous, or the feature you were relying on has been removed. Failing to prove you're not a robot doesn't get you shot by Harrison Ford, at least not yet, but it can keep you off eBay.

The answer here is not to build a "better" AI and feed it more and "better" surveillance signals. It's to find a different model for identifying humans online without compromising their privacy. That won't be a one-size-fits-all solution invented by one company; it will be industry-wide adoption of new standards and new methods.

Similarly, you will never be able to buy a third-party AI that is demonstrably pure of heart. Truth be told, you'll never be able to build one yourself either, at least not if your team is big enough, or your company culture loose enough, for internal fraud to happen. That means any team of two or more, and any viable company culture yet invented.

That's OK, once you stop hunting for that particular unicorn. In theory, we can't verify non-trivial computer systems of any kind. When we have to use computers where failure is not an option – flying an aircraft, exploring space – we use multiple independent systems and majority voting.

If it strikes you that building a grand scheme on top of a "perfect" black box works about as well as designing a human society around the perfectly rational human, congratulations. Managing the complexity of real-world data at real-world scale means accepting that any system is fallible in ways that can't be patched or programmed away. We're not yet at the point where AI engineering shades into AI psychology, but it's coming.

In the meantime, there's no need to give up on your AI-based financial fraud detection. Buy three AIs from three different companies. Use them to check each other. If one goes bad, run on the other two until you can replace the first.
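As a rough illustration of the idea – a minimal sketch with invented model names and a made-up interface, not a production design – a two-out-of-three vote across independently sourced detectors looks something like this:

```python
from collections import Counter


def majority_verdict(transaction: dict, detectors: list) -> str:
    """Ask each independently sourced fraud detector for a verdict and
    take the majority. A single backdoored model gets outvoted 2-1."""
    verdicts = [d.classify(transaction) for d in detectors]  # e.g. "fraud" / "ok"
    verdict, votes = Counter(verdicts).most_common(1)[0]
    if votes < 2:
        # Three-way disagreement: fail safe and escalate to a human.
        return "review"
    return verdict


# Usage: three models from three vendors, trained on different data, so a
# trigger planted in one is unlikely to fire in all of them at once.
# verdict = majority_verdict(txn, [vendor_a_model, vendor_b_model, vendor_c_model])
```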

Can't afford three AIs? Then you don't have a viable business model. At least the AI is very good at proving that. ®