Rule-based AI systems make determinations using algorithms built on statistical models or if-then decisions. Complex systems, in contrast, use machine learning to autonomously recognize patterns and relationships in large amounts of data. Based on reference data and over multiple iterations, they are trained to perform specific tasks; the resulting model can then be applied to new data. In essence, three learning methods are distinguished: Supervised Learning, Unsupervised Learning, and Reinforcement Learning. Each requires differently prepared reference datasets, suits different algorithms, and is used for distinct areas of application.
1. Supervised Learning
The goal is to recognize patterns and relationships in annotated or categorized data and apply them to new input data. This requires pre-labeled training datasets that contain observations of the model's target feature. These can be generated, for example, from historical records such as data on loss events for loan risk models, but also through manual evaluation by experts, for example the annotation of image data. The most important quality criterion is the discriminatory power of the model. It will learn rather quickly to distinguish data records with regard to a simple yes-or-no question, for example “Will the debtor in all probability pay back a loan?” In the training phase, the dataset is used to iteratively tune an algorithmic model. With each iteration, the model parameters are changed slightly and tested against the target variables from the dataset. If the prediction error decreases, the model serves as the basis for the next iteration, so prediction quality increases with each iteration. However, there is a danger of so-called overfitting, i.e., an excessive adaptation of the algorithm to the training data. The consequence is deteriorated prediction quality when the model is used on real data. Therefore, measures have to be put in place to measure, monitor and prevent overfitting (e.g., through train/test-split frameworks, training data augmentation or synthetization).
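The train/test split mentioned above can be sketched in a few lines. Everything here is an illustrative assumption: the synthetic 1-D data, the polynomial models and the 80/20 split stand in for a real labeled dataset and model; the point is only how a held-out test set exposes overfitting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely synthetic 1-D data standing in for a labeled training set
x = rng.uniform(-1.0, 1.0, 40)
y = np.sin(3.0 * x) + rng.normal(0.0, 0.1, 40)

# Hold-out split: train on 80 % of the records, evaluate on the unseen 20 %
idx = rng.permutation(len(x))
train, test = idx[:32], idx[32:]

def mse(coeffs, xs, ys):
    """Mean squared error of a fitted polynomial on the given points."""
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

errors = {}
for degree in (3, 9):  # a simple model vs. a much more flexible one
    coeffs = np.polyfit(x[train], y[train], degree)
    errors[degree] = (mse(coeffs, x[train], y[train]),   # training error
                      mse(coeffs, x[test], y[test]))     # held-out error
    print(degree, errors[degree])
```

The flexible model always fits the training data at least as well as the simple one; whether it also does better on the held-out records is exactly what the split reveals. When training error keeps falling while test error rises, the model is overfitting.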
The supervised learning method is suitable, for example, for predicting credit default risks. In this case, the target variable is the probability that a loan will default within the next twelve months. Logistic regression is typically used as the algorithm, and training data can be obtained from historical data on loan defaults.
Important: the model must work equally well in different economic phases, because the probability of loan defaults depends on the economic cycle. A model trained on data from the worldwide pandemic will not work very well during a global growth phase, and vice versa. The training datasets must therefore be statistically representative of the model's prospective application period.
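A minimal sketch of such a model, fitted by plain gradient descent, might look as follows. The two features, the synthetic target rule and all hyperparameters are invented for illustration; no real loan data or production risk model is implied.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical borrower features: [debt-to-income ratio, payment delays]
X = rng.normal(size=(200, 2))
# Synthetic target: 1 = default within twelve months (invented rule + noise)
y = (X @ np.array([1.5, 1.0]) + rng.normal(0.0, 0.5, 200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression fitted by gradient descent on the log loss
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)            # predicted default probability
    grad_w = X.T @ (p - y) / len(y)   # gradient of the log loss w.r.t. weights
    grad_b = float(np.mean(p - y))
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = float(np.mean((sigmoid(X @ w + b) > 0.5) == y))
print(f"training accuracy: {accuracy:.2f}")
```

The model's output is a probability between 0 and 1, which is what makes logistic regression a natural fit for default prediction: a cut-off (here 0.5) turns the probability into a yes/no decision, and the cut-off can be tuned to the bank's risk appetite.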
2. Unsupervised Learning
In Unsupervised Learning, AI models are trained without a given target variable: predictions are made on unknown, previously uncategorized data. A typical example is so-called clustering, in which the algorithm forms groups of data records with highly similar features. The categories for this grouping are not predefined; the AI forms them independently in the course of processing. The decisive factor is the distance measure between data points: within a cluster, it is smaller than the distance to other groups.
Typical use cases for Unsupervised Learning are customer segmentation and anomaly detection. Thus, unsupervised AI algorithms can form groups from a mass of customer data with similar characteristics such as product usage, sales, or the like. Marketing measures or product developments can then be planned specifically for these customer groups, for example.
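As an illustration of such customer segmentation, the sketch below implements a basic k-means clustering on two invented customer features (product usage and sales). The data, feature names and cluster count are assumptions for demonstration only, not a recommendation for a real segmentation model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented customer features: [monthly product usage, yearly sales in kEUR]
customers = np.vstack([
    rng.normal([2.0, 10.0], 1.0, (50, 2)),   # light users
    rng.normal([8.0, 40.0], 1.0, (50, 2)),   # heavy users
])

def kmeans(data, k, iters=20):
    """Basic k-means: alternate nearest-center assignment and center update."""
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # assign each record to its nearest center (Euclidean distance)
        labels = np.argmin(np.linalg.norm(data[:, None] - centers, axis=2),
                           axis=1)
        # move each center to the mean of its members (keep it if empty)
        centers = np.array([data[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

labels, centers = kmeans(customers, k=2)
print(centers)  # one center per customer segment
```

No target variable is involved anywhere: the two segments emerge purely from the distances between data points, exactly as described above.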
To detect deviating behavior patterns, the AI identifies those records whose pattern of data points does not correspond to that of the mass of data examined. Suitable algorithms include, for example, Local Outlier Factor or Isolation Forest. One practical application is fraud prevention, where anomaly detection is used to pre-select suspicious cases, which are then further scrutinized by humans.
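The snippet below is not the Local Outlier Factor or Isolation Forest algorithm itself, but a deliberately simplified nearest-neighbour distance score built on the same idea: records that lie far from the bulk of the data get a high anomaly score. The "transactions" are invented, with one outlier injected on purpose.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented transaction features: 100 normal records plus one injected anomaly
normal = rng.normal(0.0, 1.0, (100, 2))
transactions = np.vstack([normal, [[8.0, 8.0]]])

def knn_score(data, k=5):
    """Anomaly score: mean distance to the k nearest neighbours."""
    dists = np.linalg.norm(data[:, None] - data[None, :], axis=2)
    dists.sort(axis=1)
    return dists[:, 1:k + 1].mean(axis=1)  # column 0 is the self-distance

scores = knn_score(transactions)
suspect = int(np.argmax(scores))  # highest score = most anomalous record
print(suspect)  # → 100 (the index of the injected outlier)
```

In a fraud-prevention setting, only the records above some score threshold would be handed to human analysts, which is the pre-selection step described above.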
3. Reinforcement Learning
This method works entirely without training and test datasets. Reinforcement Learning uses simulated system states (based on real-world or synthetic data) and rewards for correct predictions, resembling the human learning process: the algorithm learns to evaluate the changes it predicts based on a reward system. Due to these properties, it is particularly well suited to cases in which the goal is known but the solution path is not. The AI generates the training data itself by attempting to arrive at the desired prediction over many simulation runs; these iterations proceed by trial and error. The self-learning algorithm, also known as the agent, receives feedback for each action as defined in advance by a reward function. Over time, the agent learns to assess the effect of its actions on the simulation environment and develops a strategy for maximizing rewards in the long term.
Developing Reinforcement Learning systems is much more complex than the other two approaches, but the resulting predictions are also much more comprehensive. Prerequisites for its application are problems that can be modeled and simulated, as well as clear evaluation criteria. Corresponding AI systems could be used for urban traffic flow control, for example.
The simulation environment for reinforcement learning can be the road network of a city including the traffic light circuits. Using the traffic light phases as a control variable, the agent is then trained to either ensure high average speeds or short congestion times. The reward system is defined taking into account the desired result.
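A real traffic simulation is far beyond a short snippet, but the reward-driven learning loop itself can be shown on a toy stand-in: an agent on a row of five positions that receives a reward only at the right end. The environment, reward function and hyperparameters are all illustrative assumptions; the update rule is standard tabular Q-learning.

```python
import random

random.seed(4)

# Toy environment: 5 positions in a row; reward only at the right end.
# (An illustrative stand-in for a traffic simulation, not a real one.)
N_STATES, ACTIONS = 5, (-1, +1)          # actions: move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, eps = 0.5, 0.9, 0.2        # learning rate, discount, exploration

for _ in range(500):                     # many simulated runs: trial & error
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward function
        # Q-learning update: nudge the estimate toward reward + discounted future
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # the learned strategy: move right, toward the reward
```

Note that no labeled training data appears anywhere: the agent generates its own experience through repeated simulation runs and distills it into a strategy, which is exactly the mechanism described above. For traffic control, the state would be the network situation, the actions the traffic-light phases, and the reward the chosen target metric.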
Choosing the right method
Which learning method is best suited for a desired AI application must therefore be decided on a case-by-case basis; the examples above give a first indication. For many cases, proven algorithms that support processes digitally are already available. The AI experts at BCG Platinion are specialists in selecting the appropriate solution approach from the three learning methods based on the problem at hand and implementing it at high pace.
Find out more about AI here:
- What You Always Wanted to Know About AI and More: a guide to how AI can really benefit your business.
- Model Accuracy in AI: How Accurate is Accurate Enough? The second article of our AI series dives deep into the quality and performance of the models used.
- AI: Detect and Avoid Bias at an Early Stage. In article three of our AI series, we present a method for minimizing bias to get realistic results from AI models.
- AI: Data Quality: What to Do When Errors Occur? Article four of our AI series focuses on the importance of the quality of the data used.
- AI: Success Factor Data Preparation. Article five of our AI series highlights the right selection of data.