“One of the most significant developments in technology—and society in general—over the next several years will likely be the use of smart machines to replace human labor.”
This is how Tom Davenport, author of Competing on Analytics, opened a new article for Deloitte on cognitive technology. Davenport’s focus in the article is less on technology than on how humans can keep their jobs in the era of “bionic brains.” As he puts it, “We need to identify ways in which smart humans can augment the work of smart machines, and vice-versa.”
As one of the people building the next generation of smart machines, I take his point. I unequivocally believe that smart machines do and will augment human work.
But I don’t think of this as a competition between people and automation. Smart machines aren’t the problem. The problem is that machines are not smart enough — yet. That’s the focus for my team’s research efforts: Making better and better analytics that make machines smarter and provide greater benefit to all of us.
Here are three ways we’re making smart machines smarter.
Adaptive models – learning from results
An adaptive model is exactly what it sounds like — a model that learns as it goes. In technical terms, this is an analytic layer that can be “bolted” onto a traditional neural network, such as we use to detect fraud. The addition of this layer makes fraud detection more sensitive to changing fraud patterns in real time, enabling adjustments that respond to new schemes and emerging trends more quickly.
The adaptive layer’s responsiveness comes from dynamic feature selection and weighting. Based on recent dispositions of referred fraud cases — in other words, a feedback loop on what anomalous activity was actually determined to be fraud — the model selects relevant features from a superset of candidate variables, which may include new features not used by the base neural network model. It also adjusts the weightings of connections between features, further tuning scoring based on what is currently happening in the production environment.
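To make the feedback loop concrete, here is a deliberately simple sketch of the idea — hypothetical feature names, a basic additive weight update and top-k selection, not the production adaptive layer:

```python
class AdaptiveLayer:
    """Toy adaptive layer: re-weights candidate features based on
    recent analyst dispositions (fraud / not fraud)."""

    def __init__(self, candidate_features, top_k=3, learning_rate=0.1):
        self.weights = {f: 1.0 for f in candidate_features}
        self.top_k = top_k
        self.lr = learning_rate

    def selected_features(self):
        # Dynamic feature selection: keep the currently
        # highest-weighted candidates.
        ranked = sorted(self.weights, key=self.weights.get, reverse=True)
        return ranked[:self.top_k]

    def update(self, case_features, was_fraud):
        # Feedback loop: boost features that fired on confirmed fraud,
        # decay features that fired on false positives.
        for f in case_features:
            if f in self.weights:
                delta = self.lr if was_fraud else -self.lr
                self.weights[f] = max(0.0, self.weights[f] + delta)

# Hypothetical candidate variables for a card-fraud model:
layer = AdaptiveLayer(["velocity", "geo_mismatch", "amount_zscore",
                       "device_change", "mcc_shift"])
# Simulated dispositions coming back from referred cases:
layer.update(["velocity", "device_change"], was_fraud=True)
layer.update(["amount_zscore"], was_fraud=False)
print(layer.selected_features())
```

After even two dispositions, the selected feature set shifts toward the variables that actually separated fraud from false positives — the same principle, at toy scale, that lets the real adaptive layer track what is happening in production.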
Self-calibrating models – adjusting to new situations
FICO developed a patented technology for detecting suspicious behavior in “command-and-control” relationships, such as between a “bot” on a malware-infected computer and the “bot master” on a server somewhere giving it commands. The concept is similar to how the brain controls the body and its actions through synapses. The model not only spots known patterns used by malware to connect to command-and-control infrastructures, but also flags unusual computer behavior.
To do this in cybersecurity, we’re using a multi-layered self-calibrating outlier model, a technology we have already deployed for FICO fraud detection. Shown as a high-level diagram in the figure below, this analytic architecture resembles a neural network. There’s a hidden layer, where weighted connections between data variable features are made, adjusted and tested.
Self-Calibrating Model Architecture
The difference is that in this hidden layer, the nodes are self-calibrating outlier models working in parallel. Each connects variable features in different ways to examine different relationships between them, then scores these relationships for how unusual they are (i.e., how much of an outlier compared to a peer group). These scores are then fused into an overall score indicating relative threat level.
Multi-layer self-calibrating outlier analytics have the advantage of requiring less labeled data for model development. Instead of being trained on months of historical data to recognize normal and abnormal values for data features, self-calibrating models infer these values in real time from the stream of transactions. That ability to learn on the fly makes them effective for new applications where historical data may not exist, for markets where available data may be of low quality, and for any environment where behavior is rapidly changing. These models can start with zero historical data, learning feature distributions and the connections between variables as transactions stream through them, and rank-ordering behavioral anomalies as they go — powerful self-learners driven entirely by the data passing through.
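As an illustration of the general idea — a simplified sketch, not FICO’s patented formulation — a node might keep decayed running estimates of a feature’s distribution and score each new value by how far it sits in the tail, with per-node scores then fused into one threat score:

```python
class SelfCalibratingNode:
    """One hidden-layer node: streams a feature, keeps a running
    estimate of its center and scale, and scores how far each new
    value falls in the tail. Illustrative only."""

    def __init__(self, decay=0.99):
        self.decay = decay
        self.mean = 0.0
        self.var = 1.0
        self.n = 0

    def score(self, x):
        # Update running mean/variance with exponential decay so the
        # node recalibrates as the stream drifts — no training data.
        self.n += 1
        alpha = max(1.0 - self.decay, 1.0 / self.n)
        diff = x - self.mean
        self.mean += alpha * diff
        self.var = (1 - alpha) * self.var + alpha * diff * diff
        # Outlier score: standardized distance, clipped at zero.
        z = (x - self.mean) / (self.var ** 0.5 + 1e-9)
        return max(0.0, z)

def fused_score(nodes, features):
    # Fuse per-node outlier scores into an overall threat score
    # (a simple average here; weighted fusion is also common).
    return sum(node.score(x) for node, x in zip(nodes, features)) / len(nodes)

node = SelfCalibratingNode()
for _ in range(200):          # a quiet stream establishes "normal"
    node.score(0.0)
print(node.score(10.0))       # a tail value scores high
```

The key property the sketch preserves is that “normal” is never supplied up front: each node calibrates itself from the stream, which is what lets this architecture start from zero historical data.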
Self-organizing genetic algorithms – evolving a species
Self-organizing genetic algorithms, which could be applied to cybersecurity and other business areas, are based on reinforcement learning methods. There are strong parallels here with natural selection — in fact, we can use the metaphor of an ant colony to describe how this technology works.
The general idea is that a group of dumb software agents (like individual ants) interact with their environment and are rewarded or penalized around a small set of success criteria. Gradually “genes” of successful behavior emerge as the agents begin to map out the risk of various inter-related activities. Those with few successful genes receive a low “fitness” score and die out, whereas those with many successful genes score high and are allowed to reproduce or combine with other high-scoring agents. In this way, the overall performance of the group increases.
How Self-Organizing Genetic Algorithms Learn
Because the environment is changing, agents not only act in the optimal way based on their current best “map of the world,” they also experiment. Using probabilities, they make slight variations around the optimal strategy and associated genes, and as they receive rewards and penalties, learn from these experiments and adjust to a changing fitness landscape.
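The evolutionary loop described above can be sketched on a toy problem. Everything here is an illustrative stand-in — the bit-string “genes,” the hidden target pattern and all parameters — not a real risk-mapping environment:

```python
import random

random.seed(42)

# Toy environment: the (hidden) reward favors one target pattern,
# standing in for the risk map agents must discover.
TARGET = [1, 0, 1, 1, 0, 1, 0, 1]

def fitness(genes):
    # Reward/penalty around a small set of success criteria:
    # one point per gene that matches the environment's target.
    return sum(1 for g, t in zip(genes, TARGET) if g == t)

def reproduce(parent_a, parent_b, mutation_rate=0.05):
    # Crossover: combine genes from two high-scoring agents.
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    # Mutation: slight variations around the current best strategy,
    # so the population keeps experimenting as the landscape shifts.
    return [1 - g if random.random() < mutation_rate else g
            for g in child]

# Start with a population of "dumb" agents with random genes.
population = [[random.randint(0, 1) for _ in range(len(TARGET))]
              for _ in range(30)]

for generation in range(40):
    # Low-fitness agents die out; high scorers survive and reproduce.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [
        reproduce(random.choice(survivors), random.choice(survivors))
        for _ in range(20)
    ]

best = max(population, key=fitness)
print(fitness(best), "out of", len(TARGET))
```

Over the generations, the group’s overall performance rises even though no individual agent is given the target: selection pressure plus mutation-driven experimentation does the learning, which is the essence of the approach.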
These three technologies don’t just make smarter machines — they make their human masters more successful. (I apologize if that term is offensive to any machines reading this blog.)
In my next post on this subject, I’ll talk about how we’re training analytics to develop a fundamentally human trait: curiosity.
For more information, see the Insights paper on “Does AI + Big Data = Business Gain?”