Keeping AI on a Leash

AI can see patterns in complex data, but some decisions are best made by humans.

Artificial intelligence is now so central to cybersecurity that it would be difficult to manage the field without it. Hackers use AI to generate new malware in a kind of digital Darwinism, testing millions of variations a year and devoting more effort to the attacks that succeed. Cybersecurity firms respond with AI defenses of their own, automatically detecting suspicious activity and even providing self-patching routines.

As PathGuard discussed with Rebellion Research, the AI financial advisor and think tank, black-hat hackers and cybersecurity providers each turn to artificial intelligence in response to the actions of the other. One AI can probe systems for known flaws, even newly discovered ones, while another can spot intrusions the instant they occur. Speed, repetition, and pattern recognition are the strengths of AI systems in constant use throughout the digital world.

But as machine learning spreads into far more applications, the associated risks have broadened as well. AI now controls the functions of many devices, and many of those devices face an invisible risk.


The Chicken and the AI Black Box

Modern AI relies primarily on deep learning, a set of algorithms inspired by the human brain. Strictly speaking, the term refers to AI run on a neural network, a web of thousands or millions of interconnected nodes, but it is often used more loosely for any AI that learns the same basic way: by running iterations of predictions through huge quantities of data. Each time a prediction proves wrong, the algorithm adjusts itself and tries again, gradually improving with every pass through the data.
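To make the idea concrete, here is a deliberately tiny sketch in Python of that predict-check-adjust cycle. It is an illustration only: the data, the single adjustable parameter, and the update rule are hypothetical stand-ins for what real deep-learning systems do across millions of parameters.

```python
import random

# Hypothetical training data: pairs of (input, correct answer).
examples = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

weight = random.random()  # the model's single adjustable parameter
learning_rate = 0.01

# Repeatedly predict, compare against the known answer, and nudge the
# parameter in whichever direction reduces the error.
for _ in range(1000):
    for x, target in examples:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x  # adjust and try again

print(f"learned weight: {weight:.3f}")  # settles near 2.0 for this toy data
```

The loop never states the rule "multiply by two"; it simply keeps adjusting until its guesses stop being wrong, which is the pattern the rest of this article is concerned with.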

The process is akin to the two-year training course at the Zen-Nippon Chick Sexing School in Japan.

Founded in the 1920s, the school grew out of the demand for female chickens. Male chicks, or cockerels, are of little use to those wanting eggs, so there is value in telling the sexes apart early. But determining the sex of a chick is difficult, so difficult that there is no standard method that reliably captures the difference. Even at the Zen-Nippon school, trainees were given little explanation of what to look for when they turned a chick over to examine its underside.

Instead, each student paired with a master, who responded to a trainee's guess about the sex of a given chick with a terse 'yes' or 'no.' After many, many repetitions, a trainee learned to identify chicks correctly, even if the reasons for doing so couldn't be put into words.

Modern AI programs arrive, through trial and error, at rules that are similarly difficult to express. We see the output, much as with a mathematical 'black box,' where the results are visible even when the process is not. But we don't know the steps taken, nor can we know in advance what result the AI will produce when fed a new set of information.

That raises security concerns. When the AI in the new 'closed-loop' insulin pumps works properly, people with Type 1 diabetes and others dependent on insulin get the medicine they need automatically. That automated delivery, though, raises the question of how to keep each dose within a safe range.

Those systems, available this year, include a variety of safety features to ensure accurate delivery. Users set a limit on the amount of insulin that can be delivered in a single dose, for instance. But when that limit is itself managed in software, what prevents the AI from changing it? Sandboxing is one method: a pump can place the safety limits under the control of a program separate from the auto-dosing AI. Safer still is to place hardware limits on what an AI can and cannot do.

Fast and accurate as an AI can be, certain decisions are best made by humans. By putting the safety parameters in a section of memory beyond the reach of the AI, the pump, or any other device, can require a human to change those essential settings.
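As a rough sketch of that separation, consider the following Python outline. The names (SafetyGuard, propose_dose, and the rest) are invented for illustration and do not reflect any real pump's software; in an actual device the guard's role would be played by separate firmware or protected memory rather than a class in the same program.

```python
class SafetyGuard:
    """Holds the safety limit; the dosing AI never receives a reference to it."""

    def __init__(self, max_units_per_dose: float):
        self._max_units = max_units_per_dose

    def approve(self, requested_units: float) -> float:
        # Clamp any request from the AI into the human-set range.
        return max(0.0, min(requested_units, self._max_units))

    def update_limit(self, new_max: float, human_confirmed: bool) -> None:
        # Models a protected setting: software alone cannot change it.
        if not human_confirmed:
            raise PermissionError("safety limit changes require human confirmation")
        self._max_units = new_max


def deliver_dose(dosing_ai, glucose_reading: float, guard: SafetyGuard) -> float:
    requested = dosing_ai.propose_dose(glucose_reading)  # opaque black-box output
    return guard.approve(requested)                      # bounded before delivery
```

The point is the division of authority: the learning component proposes, the guarded component disposes, and the ceiling itself moves only when a person says so.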


Speed and Safety

Humans can thus set limits on the range of choices available to an AI while still gaining from the automation it provides. A water treatment plant, for instance, might use machine learning that discovers a community benefits from preemptively raising purification levels ahead of a storm. Operators can be confident those increases will stay within acceptable bounds, even if they can't see inside the black box of algorithms the AI uses to choose new levels.
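The same division can be sketched for the water plant, again with invented names and limits purely for illustration: whatever the opaque model recommends, the value actually applied stays inside the range the operators chose.

```python
# Operator-approved range for the treatment level; set by humans, not by the model.
MIN_LEVEL, MAX_LEVEL = 1.0, 4.0  # hypothetical units for a purification chemical

def apply_purification_level(model_recommendation: float) -> float:
    """Accept the AI's suggestion only within the operator-approved bounds."""
    bounded = max(MIN_LEVEL, min(MAX_LEVEL, model_recommendation))
    if bounded != model_recommendation:
        print(f"recommendation {model_recommendation} out of range; applying {bounded}")
    return bounded

# During a storm the black-box model might recommend a sharp increase; the
# applied value is bounded either way.
print(apply_purification_level(3.6))   # within bounds: applied as-is
print(apply_purification_level(12.0))  # out of bounds: clamped to 4.0
```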

Hardware protections such as PathGuard can ensure that safety and security parameters are set and maintained even as the applications running on a given system change automatically. PathGuard enforces separation between a system's safety settings and the side operated by machine learning. That way, humans and artificial intelligence each control the choices they make best.
