Once you have reviewed a model from the perspective of data suitability, you may want to go a step deeper and understand how well the classification model itself is performing.
In line with industry best practices, Rev.Up propensity models use binary classification as the basis for propensity estimation. The raw model output is compared with the expected conversion rate derived from past data. In Rev.Up, this comparison is expressed as Lift - the conversion enhancement scale factor, i.e. how many times a segment's conversion rate exceeds the average conversion rate.
Example: If the conversion rate of accounts that get the A rating from the model is expected to be 5%, while the average conversion rate is 2.5%, then the lift of accounts with the A rating is 2x (5 / 2.5 = 2).
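To make the Lift arithmetic concrete, here is a minimal sketch in Python. The `lift` helper is purely illustrative and not part of Rev.Up; it simply encodes the ratio described above.

```python
def lift(segment_conversion_rate: float, average_conversion_rate: float) -> float:
    """Conversion enhancement scale factor: segment rate over average rate."""
    return segment_conversion_rate / average_conversion_rate

# A-rated accounts convert at 5%; the overall average is 2.5%.
print(lift(0.05, 0.025))  # 2.0, i.e. a 2x lift
```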
Rev.Up performs this analysis automatically for every model: 20% of the training data is held out from model training and used for model evaluation and calibration analysis.
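As a rough sketch of what such a holdout looks like, the snippet below reproduces an 80/20 split with scikit-learn. The synthetic data and the stratified split are assumptions for illustration; Rev.Up performs the actual split internally.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a training set with a low positive (conversion) rate.
X, y = make_classification(n_samples=4000, weights=[0.975], random_state=0)

# Hold out 20% of the data for evaluation and calibration analysis.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=0
)
```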
You can review this analysis by opening a model, selecting one of the model iterations, and then opening the Performance tab.
The most common metrics used to evaluate classifier performance are Precision and Recall, measured on the model testing data. Both are easy to read for any model on the Performance page.
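If you want to verify these numbers offline, Precision and Recall can be computed directly from held-out labels and predictions, for example with scikit-learn. The labels below are made up for illustration.

```python
from sklearn.metrics import precision_score, recall_score

# Illustrative true conversion labels and model predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Precision: of the accounts predicted to convert, the share that did.
print(precision_score(y_true, y_pred))  # 0.75
# Recall: of the accounts that converted, the share the model flagged.
print(recall_score(y_true, y_pred))     # 0.75
```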
Step 1. Navigate to Model Dashboard, click View Model for the iteration you want to review, then click the Performance tab.
Step 2. In the Cumulative Conversions chart, which shows conversions for the model testing data, click and drag the white dot to choose the score threshold for the evaluation. Precision and Recall depend on the score threshold you choose for deciding between the "predicted positive" and "predicted negative" categories.
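The dependence on the threshold is easy to see in code. In this hypothetical sketch, raising the threshold trades Recall for Precision; the scores and labels are invented for illustration.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Invented model scores and true conversion labels for eight accounts.
scores = np.array([0.92, 0.81, 0.65, 0.51, 0.40, 0.33, 0.20, 0.10])
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])

for threshold in (0.3, 0.5, 0.7):
    # Accounts at or above the threshold are "predicted positive".
    y_pred = (scores >= threshold).astype(int)
    print(threshold,
          round(precision_score(y_true, y_pred), 2),
          round(recall_score(y_true, y_pred), 2))
# 0.3 0.5 0.75   <- low threshold: more conversions caught, lower precision
# 0.5 0.75 0.75
# 0.7 1.0 0.5    <- high threshold: fewer false positives, lower recall
```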
Step 3. Calculate the Precision. The Precision is the Lift multiplied by the total conversion rate. Take the Lift of positive events from the Cumulative Conversions chart, and the total conversion rate percentage from the banner at the top of the page. Rev.Up reports the Lift most prominently because Lift is often a more useful metric to share with business stakeholders. In this example, the precision is 10.71 * 2.01% ≈ 21.5%.
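Here is the same arithmetic as a short snippet, using the values from this example (a 10.71x Lift and a 2.01% total conversion rate):

```python
lift = 10.71                   # from the Cumulative Conversions chart
total_conversion_rate = 2.01   # percent, from the banner at the top

precision = lift * total_conversion_rate
print(f"Precision: {precision:.1f}%")  # Precision: 21.5%
```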
Step 4. Read the Recall. The Recall for the model is shown as % Total Conversions on this page.
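In other words, % Total Conversions is the familiar recall ratio: conversions captured above the chosen threshold divided by all conversions in the testing data. The counts below are illustrative.

```python
# Illustrative counts from a hypothetical testing set.
conversions_above_threshold = 43   # true positives at the chosen threshold
total_conversions = 100            # all actual conversions in the test data

recall = conversions_above_threshold / total_conversions
print(f"% Total Conversions: {recall:.0%}")  # % Total Conversions: 43%
```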